Javascript As a Compile Target: A Performance Breakthrough

Things are getting more and more interesting in the front-end community. Whenever we think Javascript has reached its limit, something comes out that pushes it to the next level. Mozilla has been working on a couple of interesting research projects that could redefine the performance of the web. Emscripten is one of them. It is a compiler that compiles native languages like C/C++ into highly performant Javascript code. The output format of the compiled Javascript is ASM.JS, recently regarded as the Assembly of the Web.

Why compile to Javascript

The browser can only run Javascript - that's a hard truth that will probably never change. Even though Javascript is a fairly fast dynamically typed language, its performance is still not good enough for things like graphics-intensive games. The language's dynamic nature is the main reason for the performance drawbacks, specifically:

  • Type inference: Modern Javascript engines infer types at runtime in order to emit the right machine instructions and memory layout. For example, Javascript numbers are all 64-bit floating point, but the just-in-time compiler may attempt to infer a narrower type like 32-bit integer (or more precisely 31-bit signed integer) to speed up runtime memory access. This increases JIT compilation time, resulting in slower application startup.
  • Deoptimization/recompilation: Besides type inference, JS engines also perform other optimizations involving type guessing and variable caching. But due to the dynamic nature of the language, variable types may change and caches can be invalidated at any point. When that happens, the engine needs to deoptimize and sometimes even recompile to generate better assembly.
  • Garbage collection: Garbage collection blocks execution. The more garbage there is to collect, the slower it is.
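The first two drawbacks can be made concrete with a toy sketch. Engine heuristics vary, so treat this as an illustration of the general mechanism rather than a guaranteed behavior of any particular JIT:

```javascript
// The JIT watches the types flowing into `add` and specializes it.
function add(a, b) {
  return a + b;
}

// Thousands of integer calls let the engine compile an integer-only
// fast path for `add`...
for (var i = 0; i < 100000; i++) {
  add(i, 1);
}

// ...which this single string call invalidates, forcing a deoptimization
// back to generic (slower) code that handles any type.
add("foo", "bar");
```

In a statically typed language the string call would simply not compile; in Javascript the engine has to pay for the flexibility at runtime.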

Emscripten sets out to address those drawbacks. It compiles native code into highly optimized Javascript, so that Javascript engines don't need to do just-in-time compilation and optimization. With an ASM-aware Javascript engine like OdinMonkey, the optimized Javascript can be compiled ahead of time and executed directly. The compiled code can even be cached to minimize subsequent startup time.

How does it work

A compiler front-end like Clang compiles native C/C++ code into LLVM bitcode. Emscripten takes the bitcode and turns it into Javascript instead of machine instructions. The default format for the compiled Javascript is ASM.JS. The main idea behind ASM is that it uses typed arrays as virtual memory. Typed arrays are a set of classes designed for working with raw binary data. There are a few pre-defined array types like Int8Array, Int16Array, Float32Array, Float64Array… Together they make up the virtual heap for every compiled ASM application. Specifically, every generated ASM file contains this piece of code to initialize the virtual memory:

var buffer = new ArrayBuffer(TOTAL_MEMORY);
HEAP8 = new Int8Array(buffer);
HEAP16 = new Int16Array(buffer);
HEAP32 = new Int32Array(buffer);
HEAPU8 = new Uint8Array(buffer);
HEAPU16 = new Uint16Array(buffer);
HEAPU32 = new Uint32Array(buffer);
HEAPF32 = new Float32Array(buffer);
HEAPF64 = new Float64Array(buffer);
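All the HEAP* views above alias the same underlying ArrayBuffer, which is how the compiled code can reinterpret the same bytes as different C types. A small sketch (the 8-byte buffer and the values are made up for illustration):

```javascript
var buffer = new ArrayBuffer(8);      // one shared chunk of raw memory
var HEAP8 = new Int8Array(buffer);    // ...viewed as signed bytes
var HEAP32 = new Int32Array(buffer);  // ...and as 32-bit integers

HEAP32[0] = 0x01020304;  // write through the 32-bit view
console.log(HEAP8[0]);   // reads the low byte: 4 on little-endian machines
```

This is essentially a C-style pointer cast, which is why typed arrays make a workable virtual heap.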

Now let's look at what happens when compiling the below piece of C++ code using Emscripten:

#include <stdio.h>

int main() {
  printf("hello, world!\n");
  return 1;
}
The generated JS file is about 2,000 lines long. Most of it is internal ASM runtime modules. Here are a couple of interesting parts directly related to the C++ code above:

First is the static memory initialization:

allocate([104,101,108,108,111,44,32,119,111,114,108,100,33,10,0,0], "i8", ALLOC_NONE, Runtime.GLOBAL_BASE);
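The numbers in the allocate() call are simply the character codes of the string (plus null-byte padding), which is easy to verify in any JS console:

```javascript
// The data bytes from the allocate() call, minus the trailing null padding
var bytes = [104, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100, 33, 10];
var str = String.fromCharCode.apply(null, bytes);
console.log(JSON.stringify(str));  // "hello, world!\n"
```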

The allocate() method puts an array of data (the character array in this case) of a certain type into the memory heap. The type of this sequence is "i8", an 8-bit integer, which corresponds to the Int8Array view. ALLOC_NONE tells the method not to allocate on the memory stack just yet. Then in the main function:

function _main() {
  var $1 = 0, $vararg_buffer = 0, $vararg_lifetime_bitcast = 0, label = 0, sp = 0;
  sp = STACKTOP;
  $vararg_buffer = sp;
  $vararg_lifetime_bitcast = $vararg_buffer;
  $1 = 0;
  // pointer to the string allocated above, plus the vararg list on the stack
  (_printf((Runtime.GLOBAL_BASE|0), ($vararg_buffer|0))|0);
  STACKTOP = sp;
  return 1;
}

It calls _printf() with a pointer to the beginning of the string and a pointer to the argument list residing on the stack. STACKTOP is the pointer to the current top of the stack in the virtual memory. The _printf function formats the output, writes the result onto the stack, and then to stdout. After the method execution finishes, stackRestore() is called to restore the stack's top pointer to its previous position. This makes sure stack memory only lasts for one execution context and will be overwritten in subsequent contexts.

function _fprintf(stream, format, varargs) {
  // int fprintf(FILE *restrict stream, const char *restrict format, ...);
  var result = __formatString(format, varargs);
  var stack = Runtime.stackSave();
  var ret = _fwrite(allocate(result, 'i8', ALLOC_STACK), 1, result.length, stream);
  Runtime.stackRestore(stack);
  return ret;
}

function _printf(format, varargs) {
  // int printf(const char *restrict format, ...);
  var stdout = HEAP32[((_stdout)>>2)];
  return _fprintf(stdout, format, varargs);
}

This is just a glimpse of what goes on behind the scenes of ASM. There are many more internal modules and libraries included in the generated JS file, far too many to go through completely. You can find the entire specification for the language here. The spec is not fully implemented yet, but the performance results so far are very promising.

What does the performance look like

According to Mozilla's benchmarking results, ASM code running on OdinMonkey is about 2x slower than native code, which is comparable to Java and C#. It is expected to get even better, up to 70% of native speed, after optimizing for float32 operations instead of double64. The results are as follows (lower is better):


Current Javascript engines can also run ASM code, but they still need to run it through the interpreter and JIT compiler. With such a large amount of generated code, the performance in this case is not that good. It's unlikely that Chrome's V8 will optimize for ASM anytime soon. Therefore, Firefox and Mozilla are slightly ahead in the web performance race.

The future

Game programmers will probably benefit the most from Emscripten and ASM. Currently, native games written in a subset of OpenGL can be ported to the browser with little additional effort. You can find some demos here.

As for web developers, I don't think these technologies will have a huge impact. Normal Javascript is already fast enough, and if correctly written, 99% of web applications can run as smoothly as their native counterparts. But who knows, with such a powerful tool at their disposal, creative developers may come up with all sorts of crazy things. Maybe we'll see a new generation of highly complex interactive websites that are impossible to build with the current web stack.

Other projects

One thing to note here is that Emscripten and ASM are two separate projects. ASM sets out to be the universal Javascript compile target, not just for Emscripten. Other compilers like Mandreel or JSIL can benefit from the format as well. So far, only Emscripten uses ASM as the default compile target, but other projects' implementations are on the way. I'm particularly interested in compiling LLJS to ASM. If ASM is like Assembly, LLJS is like C++ for writing readable and performant low-level code. LLJS already has its own compile target, but with ASM, its performance could get even better.


Some More JavaScript Weirdness

JavaScript is a pretty fun language with many "weird" behaviors that make developers want to kill themselves. Some are quite common, like variable hoisting or global scope pollution; some are almost unknown to the majority of frontend developers. Below is a list of the weird JavaScript features that I know of (and it's certainly not a complete list):

Primitive vs Object

JavaScript primitives are not instances of their associated wrapper object types, even though they look like they are. For example:

var test = "test";
String.prototype.testFunction = function() { return 0; };
console.log(test.testFunction());  // 0

// but...
console.log(test instanceof String);  // false
console.log(test === new String("test"));  // false

So be careful when using String or Number objects. Use primitives wherever possible unless you know what you're doing.


Arrays are also objects, and their length is calculated as the last array index plus 1. So don't do this:

var arr = [1, 2];
arr[4] = 3;
console.log(arr.length);  // 5

It also leaves "holes" inside the array, which causes some array operations to stop working:

var arr = [1, 2];
arr[4] = 3;  // [1, 2, undefined, undefined, 3]

for (var i = 0; i < arr.length; i++) {
  arr[i].toPrecision(2);
  // TypeError: Cannot call method 'toPrecision' of undefined
}
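Note that the array iteration methods behave differently around holes than a plain for loop: forEach simply skips them. A quick check:

```javascript
var arr = [1, 2];
arr[4] = 3;

var visited = [];
arr.forEach(function(x) { visited.push(x); });

console.log(visited);    // [1, 2, 3]: the holes at indices 2 and 3 are skipped
console.log(2 in arr);   // false: index 2 was never assigned
console.log(arr.length); // 5 regardless
```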

Array.prototype.sort defaults to the lexicographical comparison function:

[11, 3, 2].sort();  // [11, 2, 3]

Therefore always pass in a comparison function when calling sort():

[11, 3, 2].sort(function(a, b) { return a - b; });


Don't rely on typeof for control flow other than checking for undefined. It outputs some strange stuff:

typeof null;  // "object"
typeof NaN;   // "number"
typeof [];    // "object"


All numbers in JavaScript are IEEE 754 64-bit floating point values, which use 53 bits for the mantissa. That means the largest integer you can safely represent is 2^53, not 2^32 or 2^64 like in C or Java. Any operation that stretches beyond that range silently loses precision:

var x = Math.pow(2, 53);
x === x + 1;  // true

Unlike arithmetic operators, bitwise operators only work with 32-bit integers, so:

var x = Math.pow(2, 53);
x / 2;  // 4503599627370496
x >> 1;  // 0

The examples above are taken from this SO answer.

Truthy and falsey

false, 0, "", null, undefined and NaN all evaluate to false. Everything else is truthy. However, the fun starts when we compare those values:

false == 0;         // true
false == "";        // true
0 == "";            // true
null == false;      // false
null == null;       // true
undefined == false; // false
null == undefined;  // true
NaN == false;       // false
NaN == NaN;         // false
1 == true;          // true
[0] == true;        // false

That's why the triple equals operator (===) exists. Always use strict comparison to avoid punching yourself when writing JS code.

If you think you know JavaScript well enough, take this quiz. I only got 11/37 :(


What makes Javascript slow?

This post consolidates some of the most notable frontend performance issues related to Javascript and desktop browsers. For mobile web performance, you can read this article from Sencha.

Is Javascript really slow?

No. To be precise, a programming language is neither fast nor slow; it's just a language. What's slow is the interpreter/compiler that the language runs on and the environment it interacts with. Modern Javascript engines are not slow. In fact, they're blazing fast and highly optimized compared to other interpreted languages like Python and Ruby. To prove that, let's take the Javascript component out of the browser and see how it does. We can install a Javascript engine like V8 (from Chrome) or SpiderMonkey (from Firefox) directly and run some benchmarks. On a Mac, both can be trivially installed via Homebrew.

brew install v8
brew install spidermonkey

Let's use V8, as it's the fastest out there at the moment. Here are the results of a test multiplying two 100x100 matrices:

Python 2.7.3           225ms
Ruby 2.0.0             216ms
Javascript V8          23ms

As you can see, the algorithm, which runs in O(n^3), is much faster on Javascript V8 than on Python and Ruby. Now let's take this test and run it on Chrome, which has V8 embedded. The result is even more surprising:

Chrome 31.0 (V8 3.21)  11ms

So it looks like V8, which is already very fast, is even more optimized to run on the browser. There are some more comprehensive benchmarking tests that confirm the speed of Javascript. You can take a look here or here.

So Javascript is fast, but why are developers still complaining about its performance?

The number one culprit: The Browser(s)

Javascript is just one part of the browser. There are still two more components that make web applications work: markup and CSS. Let's take a look at each of them:

HTML and the DOM

This is the source of all evil. DOM operations are expensive. For example, let's take a look at this code, which creates 5000 DOM elements and adds them to a blank page:

for (var i = 0; i <= 5000; i++) {
  var add = document.createElement('div');
  add.innerHTML = 'Item ' + i;
  document.body.appendChild(add);
}

It has roughly 200 times fewer operations than multiplying two 100x100 matrices, but takes 53ms, almost 5 times longer, on Chrome 31.0 with V8 3.21. On older browsers, especially IE 6-8, it's much worse. So Javascript isn't to blame here. It's the DOM.

A big problem with the code above is that every change to the DOM causes a repaint and reflow: the browser has to re-render the part of the page affected by the DOM changes. As you might expect, this is expensive and should be avoided as much as possible. A general rule of thumb is to minimize DOM transactions, i.e. don't touch the DOM unless you absolutely have to. A good technique is to use a DocumentFragment to batch-append multiple DOM elements to the page.

var fragment = document.createDocumentFragment();

for (var i = 0; i <= 5000; i++) {
  var add = document.createElement('div');
  add.innerHTML = 'Item ' + i;
  fragment.appendChild(add);
}

document.body.appendChild(fragment);


Appending the fragment is treated as a single transaction by the DOM API and therefore results in only one reflow. Modern browsers already optimize for this kind of pattern to make our lives easier, but that doesn't matter if our users are still stuck with browser versions from a couple of years ago.


CSS

How exactly does CSS work? CSS is simply a style sheet that the browser consults before rendering the DOM on the page. The key thing to note here is that CSS is consulted after the DOM has been generated. That means applying it also involves DOM traversal, which can sometimes cause performance problems.

Contrary to popular belief, the browser reads CSS rules from right to left, not left to right. For example this rule:

treehead treerow treecell .odd {…}

is read as: look for all elements with class odd, then traverse up the DOM tree and filter out the ones not belonging to treecell, then treerow and treehead. For the reason why browsers do that, see this SO answer.

Let's do a quick measurement to see how bad this kind of descendant selector actually is. We can use Chrome's Speed Tracer for this purpose. The results below were obtained from SpeedTracer for a document with 100 div elements with the class odd, only a fraction of which match the desired selector.


And here is the result for the same document, but with all the desired elements sharing a custom class and the selector applied to that class directly:


That's almost a 3x improvement in style recalculation time. To be fair, most of the time we don't need to care about CSS performance, as modern browsers optimize it quite well (the two versions above are not much different in the latest versions of Chrome). But it's always good to follow best practices, especially avoiding descendant and child selectors. It's also good to run your application through SpeedTracer to identify performance issues early on.

Javascript: The slow parts

As we already know, Javascript is a relatively fast scripting language. Most frontend performance problems are caused by the DOM and browser interaction, not the language itself. However, there are features in Javascript that can be problematic if used incorrectly. Below are some of the notable ones:

Prototypal inheritance

Looking up variables in long prototype chains is not a good thing, especially when the lookup is repeated over and over again. So if you find yourself accessing inherited data frequently, it's better to cache the data in a local variable:

var Foo = function() {};
Foo.prototype.name = "Ryan";
var foo = new Foo();

var doSomething = function() {
  // Caching
  var name = foo.name;

  for (var i = 0; i < 100; i++) {
    console.log(name);        // good
    // console.log(foo.name); // bad: walks the prototype chain every time
  }
};

Function scope

Similar to prototypal inheritance, looking up data through long function scope chains can also be costly. Again, caching is the key here:

var func1 = function() {
  var name = "Ryan";

  var func2 = function() {
    var nameCache = name;  // Caching

    for (var i = 0; i < 100; i++) {
      console.log(nameCache);  // good
      // console.log(name);    // bad: walks up the scope chain every time
    }
  };
};

Loops

for…in and forEach loops are quite poor in performance compared to the normal for loop. I personally think for…in should be avoided most of the time, as it doesn't provide much benefit. forEach should only be used when you need the function callback it provides. Most of the time, the good old for loop is sufficient. Also, caching the array length can provide some more performance gain:

var length = arr.length;

for (var i = 0; i < length; i++) {
  // Do something
}

Single threaded

The biggest disadvantage of Javascript is its lack of multi-threading support. That means heavy computation cannot be split into concurrent tasks to make it faster. There's not much developers can do about it, and it's pretty much unnecessary anyway. Frontend code rarely has to deal with IO, which is the most expensive operation in the concurrent programming world, and I've never run into an algorithmic computation so heavy that it had to be split across multiple threads.

Note that optimizing the Javascript features above doesn't provide much performance gain compared to optimizing DOM manipulation. Unless you're working with very old browsers, this shouldn't be much of a concern.


Javascript is probably the most misunderstood language in the world. It's a fast scripting language, much faster than Ruby or Python, but the browser has given Javascript a bad image. The DOM is slow, and Javascript has done the best it can to offset the many issues with DOM manipulation. The most important takeaway for frontend developers is to know where and when to touch the DOM and to question every DOM manipulation. It's also crucial to know your users and the browsers they're using; there's no point in optimizing for performance when your users' browsers already do it for you. For more in-depth views on frontend performance optimization, please check out the references below.


Nicholas C. Zakas: Speed Up Your Javascript

Ariya Hidayat and Jarred Nicholls: Hacking WebKit & Its JavaScript Engines

Steve Souders: High Performance Websites

Sencha: 5 Myths About Mobile Web Performance

Google Developers: SpeedTracer Examples

Google Developers: Web Performance Best Practices

Mozilla: Writing efficient CSS


No CoffeeScript for beginners


CoffeeScript is growing in popularity. Taking a tour around recent web repositories on Github, you'll notice a lot of CoffeeScript being used instead of our good old Javascript. There has been a heated debate on whether CoffeeScript is worth it, and the best answer is "you either hate it or love it". However, I think CoffeeScript is designed for programmers who already know Javascript well. For beginners, it actually makes the Javascript learning journey quite a lot harder. CoffeeScript has so much behind-the-scenes magic that some essential Javascript behaviors are no longer obvious. Let's take a look at two of the most important Javascript features, execution context and prototypal inheritance, and why CoffeeScript is not a good learning tool in those cases.

Execution context

Execution context is basically the environment the current code is being evaluated in. It is the reason for some prominent Javascript behaviors that are so well beautified by CoffeeScript that they are almost hidden from beginners.

Global context

At the bottom of the execution stack is always the global context, which can be accessed by everything. The language's dependence on the global context is one of the ugliest things about Javascript and should be treated with caution. CoffeeScript tries to avoid accidentally adding things to the global context by automatically wrapping all the code in a function call:

(function() {
  // Your code...
})();

as well as by automatically adding var to every variable declaration. These are good practices every Javascript programmer should follow, but they have become behind-the-scenes magic in CoffeeScript. Lots of beginners don't understand why the global context is bad and why CoffeeScript does it that way. I often see people who are too used to CoffeeScript struggle with when and where to use the global context when switching back to Javascript. Some of them even forget to use var most of the time. The result is very nasty.

Scope chain and this

The this keyword in Javascript is one of the most confusing things in the language, and most people learn it the hard way. In a nutshell, the value of this is determined during the creation stage of the execution context and refers to the context in which the function is called. It gets even more confusing in the case of event handlers. For example:

var Foo = function() {
  this.handler = function() {
    console.log(this);  // the button element, not the foo instance
  };
};
var foo = new Foo();
button.onclick = foo.handler;

this in this case is the button element, not the foo instance. For the handler function to get access to foo, we can make use of the scope chain:

var Foo = function() {
  var _this = this;
  this.handler = function() {
    console.log(_this);  // _this refers to foo
  };
};
var foo = new Foo();
button.onclick = foo.handler;

Now handler has access to _this, which is a reference to foo. CoffeeScript adds some syntactic sugar to this technique with the fat arrow:

Foo = ->
  this.handler = =>
    console.log this

It hides away the scope chain and closure, which is bad for beginners who haven't fully understood those concepts. I've seen a lot of my friends do this:

foo =
  handler: =>
    console.log this

button.onclick = foo.handler

Guess what? this now refers to the global context, because we cannot bind _this in an object literal! So it's better to learn the real thing before trying to be smart.

Note: ES6 is going to introduce a built-in arrow function, which does not get its own this in its execution context. Instead, this is lexically picked up from the outer context where the function is defined.
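A sketch of the upcoming ES6 syntax (engine support is still sparse at the time of writing):

```javascript
function Foo() {
  this.name = "foo";
  // The arrow function has no `this` of its own; it closes over
  // the `this` of Foo, so no `var _this = this` workaround is needed.
  this.handler = () => this.name;
}

var foo = new Foo();
var detached = foo.handler;  // even detached from foo...
console.log(detached());     // "foo"
```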

Variable hoisting

Another effect of execution context is variable hoisting. Most experienced Javascript programmers are aware that all variable declarations are hoisted to the top of the function. So this behavior is expected:

var foo = "Hello";
(function() {
  console.log(foo);  // undefined, because the declaration of `foo` below is hoisted to the top
  var foo = "Hello World";
})();

CoffeeScript makes this a little easier by automatically taking care of variable duplication:

foo = "Hello"
( ->
  console.log foo
  foo = "Hello World"
  bar = "Test"
)()

This translates to:

var foo = "Hello";
(function() {
  var bar;
  console.log(foo);  // "Hello"
  foo = "Hello World";
  return bar = "Test";
})();

Again, this is fine for those who already know about variable hoisting. For beginners, so much magic behind the scenes almost blinds them to this essential Javascript behavior. I can imagine a lot of "WTFs" when they face unexpected behaviors working on projects without CoffeeScript support.

Prototypal Inheritance

Understanding prototypal inheritance is the gateway to mastering Javascript. The CoffeeScript way of writing classes and inheritance is a double-edged sword for beginners. On one hand, it makes it easier for them to write a functional object hierarchy. On the other hand, it deprives them of fully understanding the language's prototypal core. For example:

class Person
  money: 0
tom = new Person()
tom.hasOwnProperty('money')  // false

This looks pretty much like Java, except for the fact that tom doesn't actually have money as its own property. Some of my friends with a Java background had a very hard time understanding this behavior when looking at the code above. Compare it to the native Javascript code:

var Person = function() {};
Person.prototype.money = 0;
var tom = new Person();

It's much clearer now that money is added to the prototype of Person, not directly to tom. Every Javascript programmer must know that the language doesn't have the concept of classes. But CoffeeScript makes this really confusing by adding an all-too-convenient class declaration syntax. The transition from classical to prototypal thinking is not going to be easy if inexperienced programmers keep using classes in CoffeeScript like that.
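The difference becomes even clearer when the prototype changes after the object has been created. A minimal sketch:

```javascript
var Person = function() {};
Person.prototype.money = 0;
var tom = new Person();

console.log(tom.money);       // 0, found by walking up to the prototype

Person.prototype.money = 100; // mutate the shared prototype...
console.log(tom.money);       // 100: tom never had his own copy

tom.money = 50;               // assignment creates an own property that shadows it
console.log(tom.hasOwnProperty('money'));  // true
console.log(Person.prototype.money);       // still 100
```

A Java-style mental model of fields copied into each instance cannot explain this behavior; the prototype chain can.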

I've been advising my beginner friends to learn as much as possible about Javascript, especially execution context and prototypal inheritance, before writing anything in CoffeeScript. Even better, they should learn the "good parts" of Javascript and how to write good Javascript code. After all, CoffeeScript is just the good parts of Javascript with some syntactic sugar. If we already write good Javascript code, CoffeeScript is just a matter of preference.


Summary of ECMAScript 6 major features

ECMAScript 6, the new Javascript standard, is going to be released by the end of 2014. Dr. Axel Rauschmayer gave a presentation about its features at the O'Reilly Fluent Conference 2013 in San Francisco; the slides can be found here. Here is my summary of what I think are the major features of ES6 that will benefit the majority of Javascript developers:

Block-scoped variables

let is going to be included in ES6, allowing block-scoped variable declarations. So no more "declare your variables at the top of the function".
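For example (runnable in any ES6-compliant engine):

```javascript
function f() {
  if (true) {
    let y = 2;  // block-scoped: exists only inside this if block
    var z = 3;  // function-scoped: hoisted to the top of f as before
  }
  return [typeof y, z];  // y is out of scope here, z is still visible
}

console.log(f());  // ["undefined", 3]
```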

Lexical this

ES6 introduces the "arrow function", a concept borrowed from CoffeeScript. Arrow functions don't get their own this at the point they're called; instead, this refers to the context in which the function is defined. For example:

function UiComponent() {
  var button = document.getElementById('myButton');
  button.addEventListener('click', () => {
    console.log(this);  // lexical `this`: the UiComponent instance
  });
}

this in this case refers to the UiComponent object where the event handler is defined, not the global window object or whatever object calls the event handler.

Parameter default values

Setting default values for function parameters is now possible:

function f(x, y = 3) { ... }
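Defaults kick in when the argument is omitted or explicitly undefined:

```javascript
function multiply(x, y = 3) {
  return x * y;
}

console.log(multiply(5));             // 15: y falls back to its default
console.log(multiply(5, 2));          // 10
console.log(multiply(5, undefined));  // 15: undefined also triggers the default
```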

Defining prototypal inheritance in object literal

__proto__ is being included in ES6, making it possible to define prototypal inheritance in an object literal:

var obj = {
  __proto__: someOtherObj,
  method: function() { ... }
};
So no more having to use the longer Object.create() form, or the weird, confusing constructor-and-new pattern.
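A small self-contained sketch, where base plays the role of the someOtherObj placeholder above:

```javascript
var base = {
  greet: function() { return "hello from the prototype"; }
};

var obj = {
  __proto__: base,
  method: function() { return this.greet(); }  // greet is found via the prototype chain
};

console.log(obj.method());                         // "hello from the prototype"
console.log(Object.getPrototypeOf(obj) === base);  // true
```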

Introduction of Symbols

Symbol is a new kind of primitive value, and each symbol is unique. This enables enum-style values, which have been a major deficiency in the language:

let red = Symbol();
let green = Symbol();
let blue = Symbol();

function handleColor(color) {
  switch (color) {
    case red:
      // ...
      break;
    case green:
      // ...
      break;
    case blue:
      // ...
      break;
  }
}
Given their uniqueness, symbols can be used as identifiers, such as object property keys:

let specialMethod = Symbol();
let obj = {
  [specialMethod]: function (arg) {
    // ...
  }
};
This guarantees no name clashes among symbol-keyed object properties, and the bracket syntax above is an instance of the new computed property names feature.

Introduction of Classes

I'm not sure whether it's a good thing to include classes in the Javascript standard, since Javascript is not a "classical" language. But in ES6, it's now possible to do this:

class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  toString() {
    return '(' + this.x + ', ' + this.y + ')';
  }
}

class ColorPoint extends Point {
  constructor(x, y, color) {
    super(x, y);  // same as super.constructor(x, y)
    this.color = color;
  }
  toString() {
    return this.color + ' ' + super.toString();
  }
}
This seems to me like Javascript going back to the conventional classical object-oriented way of doing things. Classes hide the language's prototypal core, which makes it more confusing for beginners. That's just my opinion; the ECMA committee surely has good reasons for the decision.


Modules

This is to me the biggest improvement to the language. ES6 supports exporting and importing modules across different files. Once the standard is implemented, it will be possible to write modular Javascript code without having to use external libraries.
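The syntax is expected to look roughly like this (the file names and the exact module specifier resolution are my assumptions; the draft is still in flux):

```javascript
// lib.js
export function square(x) {
  return x * x;
}

// main.js
import { square } from 'lib';
console.log(square(4));  // 16
```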

Multiple-line Strings

No need to say more, a major pain is relieved:

var str = raw"This is a text
                    with multiple lines.
                    Escapes are not interpreted,
                    \n is not a newline."; 

Above is my personal take on the major features of ES6 from a developer's point of view. ES6 is still in the draft phase and won't be completed until late 2014 according to the timeline. The specification can and will change, but the preliminary features are very much worth looking forward to. For a complete list of new features, please refer to Dr. Rauschmayer's presentation.


