Although asynchronicity is not a new concept in JavaScript, it can still cause problems, especially as new features keep being added to the language. The deeper we dig into the origins and basics of JavaScript, the easier it is to get lost and feel insecure about our knowledge of the subject, because things can get a little tricky here.

Single-threaded model 

As a single-threaded language, JavaScript has one thread on which everything is executed. It seems reasonable, then, to ask how it can be asynchronous if there is only one thread to use. Because JavaScript runs on the browser’s main thread by default, wrong usage could even leave the entire UI stuck. Several things are key to understand here: the call stack, the memory heap, Web APIs, task queues, and the event loop.

Call stack – keeping track of function calls

To keep track of the function calls that happen in a program, JavaScript uses a call stack. The nomenclature here isn’t coincidental: it behaves exactly like the stack we know from data structures. Using the Last In, First Out (LIFO) principle, JavaScript manages the execution order, so when a function is called, the JavaScript engine constructs a context for the function’s execution, places it at the top of the call stack, and begins executing the function.

Code example: 

function greeting() {
    // [1] Some code here
    sayHi();
    // [2] Some code here
}

function sayHi() {
    return "Hi!";
}

// Invoke the `greeting` function
greeting();

// [3] Some code here

source [https://developer.mozilla.org/en-US/docs/Glossary/Call_stack]

Now let’s see exactly what happened here.

  1. Ignore all function declarations until reaching the greeting() function invocation. 
  2. Add the greeting() function to the call stack list.  
  3. Execute all lines of code inside the greeting() function. 
  4. Get to the sayHi() function invocation. 
  5. Add the sayHi() function to the call stack list.  
  6. Execute all lines of code inside the sayHi() function until it reaches its end. 
  7. Return execution to the line that invoked sayHi() and continue executing the rest of the greeting() function. 
  8. Delete the sayHi() function from our call stack list. 
  9. When everything inside the greeting() function has been executed, return to its invoking line to continue executing the rest of the JS code. 
  10. Delete the greeting() function from the call stack list. 

The whole process can be visualized like this. 
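You can also observe this order directly in Node or a browser console. A small sketch, relying on the non-standard but widely supported Error.prototype.stack property, prints the stack while sayHi is running:

```javascript
function sayHi() {
    // Capture the engine's call stack at this very moment:
    // sayHi sits on top, with greeting right beneath it.
    console.log(new Error().stack);
    return "Hi!";
}

function greeting() {
    return sayHi(); // sayHi's frame is pushed on top of greeting's
}

greeting();
```

The printed trace lists sayHi above greeting, mirroring steps 5–8 of the walkthrough above.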

Memory heap – object allocation

The memory heap is an area of memory allocated to each program that can be allocated dynamically, unlike memory allocated on the stack. We can imagine the heap as a box holding all our objects. This box doesn’t reserve a fixed amount of memory for each element, but allocates more space as required. That’s why sizes are known at run time rather than at compile time, as they are on the stack.

Of course, as a memory resource, the heap is subject to garbage collection, which raises the problem of memory leaks. The garbage collector looks for elements with no references to them, so that they can be removed. Let’s look at the example code:

let person = {
    name: 'Mary',
    age: 32,
};

let employee = {
    id: 234,
};

person.employee = employee;
employee.person = person;

person = null;
employee = null;

At first glance this looks like a deadlock caused by the circular reference: the objects still refer to each other, so a naive reference-counting collector could never release them. Modern JavaScript engines, however, use mark-and-sweep garbage collection, which reclaims anything no longer reachable from the roots – so once both variables are set to null, this pair does get collected. Circular references are still worth being careful with, though: as long as any part of such a cycle remains reachable (for example, through a forgotten cache entry or event listener), the whole cycle stays in memory.

This can also be visualized like this:

Web API 

Now that we have the call stack and memory heap on our side, we need only a few more things. One of them is Web APIs: additional interfaces provided by the browser itself, for example timing functions (setTimeout), request-sending methods (fetch), and DOM-manipulation methods. Thanks to the event loop, we can use them asynchronously from the JavaScript level without clogging JavaScript’s single thread. 
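To see that a Web API call doesn’t block the thread, compare a zero-delay timer with synchronous code – a minimal sketch, runnable in a browser console or in Node (which also provides setTimeout):

```javascript
console.log("before the timer");

// setTimeout hands the timer off to the environment; only the
// callback returns to the JavaScript thread once the timer fires.
setTimeout(() => console.log("timer callback"), 0);

console.log("after the timer"); // runs before the callback does
```

Even with a delay of 0, the callback waits until the currently running code has finished.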

Task queue(s) – micro and macro

When our request is ready, the callback passed to the asynchronous Web API method is put into a queue. No callback is handled until the call stack is completely empty. The queue works on the FIFO principle, so the first element in it will be the first one handled. This queue is also called the macrotask queue; it handles all events, setTimeout callbacks, and the execution of loaded JavaScript code.
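The FIFO behaviour is easy to observe: two zero-delay timers fire in the order they were queued (a sketch assuming Node or a browser):

```javascript
// Both callbacks land in the macrotask queue;
// the one queued first is handled first.
setTimeout(() => console.log("queued first"), 0);
setTimeout(() => console.log("queued second"), 0);
```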

According to the JavaScript specification, there is another queue for handling smaller updates. It is called the microtask queue, and it manages Promise callbacks before the UI is rendered.

The rule of thumb is that for each macrotask, e.g. handling a click event, the handler is called, then all the microtasks are run, and after that the DOM is re-rendered. Only then can another task from the macrotask queue be processed.
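A sketch of that rule, runnable in Node or a browser console: all microtasks scheduled during a macrotask run before the next macrotask gets a turn.

```javascript
setTimeout(() => {
    console.log("macrotask 1");
    // These microtasks are queued during macrotask 1...
    Promise.resolve()
        .then(() => console.log("microtask a"))
        .then(() => console.log("microtask b"));
}, 0);

// ...so this macrotask has to wait for both of them.
setTimeout(() => console.log("macrotask 2"), 0);

// Order: macrotask 1, microtask a, microtask b, macrotask 2
```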

More on the microtask queue 

How does that work in practice? Let’s look at a Promise.

ES6 introduced us to Promises. If you’d like to read more about them, here’s Aneta Chwała’s article explaining the concept and its applicability:

Callbacks vs Promises in JavaScript

In JavaScript, callbacks and promises are two ways to handle asynchronous execution. Which one is better?

With the introduction of Promises came a separate, dedicated queue, existing alongside the main task queue – the microtask queue (specified in the ECMA standard as PromiseJobs).  

Let’s take a look at an example: 

let promise = Promise.resolve();

// this alert shows third
setTimeout(() => alert('setTimeout'), 0); 

// this alert shows second
promise.then(() => alert('Promise done!')); 

// this alert shows first
alert('Finished');

When we run it, we see Finished first, then Promise done!, and setTimeout last. But how is that possible if our promise was resolved from the very beginning?  

Promises have their own handlers (.then, .catch, .finally), and every one of them is always asynchronous, so they need proper management – and for that, the microtask queue is used. This internal queue also works on the FIFO principle and starts executing its tasks only when nothing else is running, i.e. when the call stack is empty. That is exactly why we got this particular order of execution. 

So, in conclusion, a single macrotask can produce multiple microtasks, and those microtasks can enqueue further microtasks; only once all of them have been processed is the macrotask considered finished and the next one taken up.
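queueMicrotask (available in modern browsers and Node) makes this easy to demonstrate: a microtask can enqueue another microtask, and both still run before the waiting macrotask.

```javascript
setTimeout(() => console.log("macrotask"), 0);

queueMicrotask(() => {
    console.log("microtask 1");
    // Enqueued while the first microtask is running,
    // yet it still beats the macrotask.
    queueMicrotask(() => console.log("microtask 2"));
});

// Order: microtask 1, microtask 2, macrotask
```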

Event loop ties everything together

The last part of this process is the event loop, which keeps everything going. Once the call stack is empty, the queue hands its first element over to the stack. While the stack runs, Web APIs may put new callbacks into the queue; when the stack empties again, the next element is taken from the queue, and so on. 
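The whole dance can be sketched as a toy model – the names and structure here are purely illustrative, not how any real engine is implemented: run one macrotask, drain all microtasks, repeat.

```javascript
const macrotasks = [];
const microtasks = [];

function runEventLoop() {
    while (macrotasks.length > 0) {
        const task = macrotasks.shift(); // FIFO: oldest task first
        task();
        // Drain every microtask before touching the next macrotask
        while (microtasks.length > 0) {
            microtasks.shift()();
        }
    }
}

macrotasks.push(() => {
    console.log("task 1");
    microtasks.push(() => console.log("microtask from task 1"));
});
macrotasks.push(() => console.log("task 2"));

runEventLoop();
// Order: task 1, microtask from task 1, task 2
```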

To make things clearer, let’s walk through an example that uses all the elements we’ve covered so far, to help visualize the whole process:

let person = {
    name: 'Mary',
    age: 32,
};

let employee = {
    id: 234,
};

function SayHi() {
    return 'Hi!';
}

const greeting = () => {
    SayHi();

    fetch('https://exampleUrlPost.com', {
        method: 'POST',
        body: JSON.stringify({ person, employee }),
    })
    .then(response => {
        if (response.status === 200) {
            return response.json();
        } else {
            // HttpError is assumed to be a custom Error subclass
            throw new HttpError(response);
        }
    });

    alert("I'm first!");
};

greeting();

Here are our two objects, person and employee, but this time there is no circular reference between them, plus the two functions from the beginning, SayHi and greeting, in slightly changed form. As we can see, SayHi is a normal function that returns a string, while greeting sends a POST request with our two objects and calls SayHi inside. 

Once the greeting function is called, the cycle begins. With our knowledge of the call stack, we know that the SayHi function is going to be at the top of it, with greeting underneath. 

Then, after SayHi returns its value, it is taken off the stack, and the greeting function continues its execution. Using fetch, provided by the Web API, we send a POST request to the chosen URL with our objects from the heap. The promise’s .then callback goes to the dedicated microtask queue while we see the “I’m first!” alert. When the promise is resolved and the call stack is empty, the microtask runs and we get our data from the fetch; the event loop then looks for another task to execute, beginning a new round.

And thanks to this mechanism, we can manage asynchronous requests with only a single thread available. 

And if you need a development team that values sharing knowledge…

Let’s talk!
Zofia Dobrowolska

Frontend developer at Makimo. Passionate about coding, inline skating and variety of handicrafts.