Debug Node.js v0.10 with 100% CPU usage

Last week a worker at my job, written in Node.js v0.10, started to fail. At some point the worker would consume 100% of the CPU and gradually increase its memory usage as well. The process was eventually forced to stop once all the available memory was consumed.

Basically I had two approaches available to find out what was causing the problem.

  1. Put console.log everywhere and find the last place in my code executed before the process got stuck at 100% CPU usage.
  2. Do it right and profile the application, finding out properly, based on statistics about the running process, how much CPU each part was consuming.

Even though console.log is a common approach for many simple cases, profiling is the best option for this kind of problem.

So this is the flow I followed to find out the problem:

  1. Start the app with profiling enabled: node --prof index.js.
  2. In another shell window run top to monitor when the process reaches 100% CPU usage.
  3. Once the process is stuck at high CPU usage, stop it.
  4. The profiler will have created a v8.log file.
  5. Node v0.10 has no built-in tool to process this file, so install one with npm install -g tick.
  6. Execute node-tick-processor and it will print the processed results of the profile log: v8-profile-cpu-100-usage.txt.
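The whole flow looks roughly like this as a shell transcript (paths and the worker's entry point are illustrative; node-tick-processor reads the v8.log in the current directory by default):

```
$ node --prof index.js               # 1. run the worker with V8 profiling enabled
  # ... in another shell, watch with `top` until CPU pins at 100%, then stop the worker
$ npm install -g tick                # node v0.10 ships no built-in tick processor
$ node-tick-processor > v8-profile-cpu-100-usage.txt
```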

Looking at the [Bottom up (heavy) profile] section I could find the module causing the 100% CPU usage.

[Bottom up (heavy) profile]:
Note: percentage shows a share of a particular caller in the total
amount of its parent calls.
Callers occupying less than 2.0% are not shown.

ticks parent  name
15924   71.2%  /opt/node-v0.10.31-linux-x64/bin/node
7742   48.6%    Function: ~derez /src/node_modules/cycle/cycle.js:46
7739  100.0%      Function: ~derez /src/node_modules/cycle/cycle.js:46
7739  100.0%        Function: ~derez /src/node_modules/cycle/cycle.js:46
7739  100.0%          Function: ~derez /src/node_modules/cycle/cycle.js:46
7739  100.0%            Function: ~derez /src/node_modules/cycle/cycle.js:46

As we can see, cycle is the guilty one. Cycle is a module used to encode and decode JSON data to JavaScript. So we already know it is probably a JS object being serialized to a JSON string. And I would guess this JS object has a circular reference, which is why the process sits at 100% CPU usage without stopping until all the available memory is consumed.
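To make that suspicion concrete, here is a hypothetical reconstruction (not the worker's actual data) of what a circular object does to serialization: plain JSON.stringify refuses the cycle, while a naive recursive walk with no "seen" set never terminates on it.

```javascript
// A hypothetical object like the one we suspected: it points back to itself.
var job = { name: "worker" };
job.self = job; // circular reference

// Plain JSON.stringify detects the cycle and throws instead of spinning.
var stringifyThrew = false;
try {
  JSON.stringify(job);
} catch (e) {
  stringifyThrew = true;
}

// A naive recursive walk with no "seen" set would recurse forever on such
// an object; a depth cap makes the runaway behavior observable.
function naiveWalk(value, depth) {
  if (depth > 1000) throw new RangeError("runaway recursion");
  if (value !== null && typeof value === "object") {
    Object.keys(value).forEach(function (key) {
      naiveWalk(value[key], depth + 1);
    });
  }
}

var walkRanAway = false;
try {
  naiveWalk(job, 0);
} catch (e) {
  walkRanAway = e instanceof RangeError;
}

console.log(stringifyThrew, walkRanAway); // true true
```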

But that was only part of the problem. We know who is responsible, but who is calling that function?

Looking into the [Top down (heavy) profile] I found that the winston module is the one using cycle. Winston is a library for logging. So our suspicion was right: probably a JS object is being saved to the log.

[Top down (heavy) profile]:
Note: callees occupying less than 0.1% are not shown.

inclusive      self           name
ticks   total  ticks   total
18799   84.0%      0    0.0%  LazyCompile: ~processImmediate timers.js:335
18767   83.9%      0    0.0%    LazyCompile: ~lib$rsvp$asap$$flush /src/node_modules/rsvp/dist/rsvp.js:1193
18747   83.8%      0    0.0%      LazyCompile: ~lib$rsvp$$internal$$publishRejection /src/node_modules/rsvp/dist/rsvp.js:957
18747   83.8%      0    0.0%        LazyCompile: ~lib$rsvp$$internal$$publish /src/node_modules/rsvp/dist/rsvp.js:1002
18747   83.8%      0    0.0%          LazyCompile: ~lib$rsvp$$internal$$invokeCallback /src/node_modules/rsvp/dist/rsvp.js:1043
18747   83.8%      0    0.0%            LazyCompile: lib$rsvp$$internal$$tryCatch /src/node_modules/rsvp/dist/rsvp.js:1034
18746   83.8%      0    0.0%              LazyCompile: ~<anonymous> /src/lib/promised-queue.js:29
18736   83.8%      0    0.0%                LazyCompile: ~logError /src/lib/promised-queue.js:80
18736   83.8%      0    0.0%                  LazyCompile: ~target.(anonymous function) /src/node_modules/winston/lib/winston/common.js:41
18736   83.8%      0    0.0%                    LazyCompile: ~Logger.log /src/node_modules/winston/lib/winston/logger.js:123
18736   83.8%      0    0.0%                      LazyCompile: ~async.each /src/node_modules/winston/node_modules/async/lib/async.js:104
18736   83.8%      0    0.0%                        LazyCompile: ~_each /src/node_modules/winston/node_modules/async/lib/async.js:30
18736   83.8%      0    0.0%                          LazyCompile: *forEach native array.js:1087
18736   83.8%      0    0.0%                            LazyCompile: ~<anonymous> /src/node_modules/winston/node_modules/async/lib/async.js:110
18736   83.8%      0    0.0%                              LazyCompile: ~emit /src/node_modules/winston/lib/winston/logger.js:175
18736   83.8%      0    0.0%                                LazyCompile: ~Console.log /src/node_modules/winston/lib/winston/transports/console.js:56
18736   83.8%      0    0.0%                                  LazyCompile: ~exports.log /src/node_modules/winston/lib/winston/common.js:112
18736   83.8%      0    0.0%                                    LazyCompile: ~decycle /src/node_modules/cycle/cycle.js:24
18736   83.8%      0    0.0%                                      Function: ~derez /src/node_modules/cycle/cycle.js:46
18736   83.8%      0    0.0%                                        Function: ~derez /src/node_modules/cycle/cycle.js:46
18736   83.8%      0    0.0%                                          Function: ~derez /src/node_modules/cycle/cycle.js:46
18736   83.8%      0    0.0%                                            Function: ~derez /src/node_modules/cycle/cycle.js:46
18736   83.8%      0    0.0%                                              Function: ~derez /src/node_modules/cycle/cycle.js:46
18736   83.8%      0    0.0%                                                Function: ~derez /src/node_modules/cycle/cycle.js:46
18736   83.8%      0    0.0%                                                  Function: ~derez /src/node_modules/cycle/cycle.js:46
18551   82.9%    998    4.5%                                                    Function: ~derez /src/node_modules/cycle/cycle.js:46
4754   21.3%   4754   21.3%                                                      /opt/node-v0.10.31-linux-x64/bin/node
3805   17.0%    296    1.3%                                                      Function: ~derez /src/node_modules/cycle/cycle.js:46

Looking above the winston ticks, the last part of my code before calling the winston lib is:

LazyCompile: ~logError /src/lib/promised-queue.js:80

And that is the end. Looking into my code at /src/lib/promised-queue.js line 80, the logError function really was calling the winston library. Based on this knowledge my fix was simple: replace winston with console.log, because at that point a simple print of the error to the console was enough.

Applying a VIM macro to a set of lines

  1. Start a macro by typing q plus a register name (in this case the register was q as well).
  2. Record the commands. I was using ds', which uses the surround plugin to delete the quotes.
  3. Finish the recording by pressing q again.
  4. Select the lines with V.
  5. Apply the macro to the selected lines with :norm! @q.
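Concretely, a session for stripping single quotes from three lines could look like this (ds' assumes the vim-surround plugin is installed; the register name q is arbitrary):

```
qq            start recording into register q
ds'           delete the surrounding single quotes (vim-surround)
q             stop recording
Vjj           visually select the current line and the two below
:norm! @q     replay the macro once per selected line
```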



Suspend vagrant before Mac sleep

Lately I was having a lot of problems with my VM, and even with Bluetooth on the Mac, just because the vmware VM was not recovering well after my MacBook woke up.

I use a VM every day and I always forget to suspend/halt it before putting my MacBook to sleep. So I found out it is possible to configure scripts to be executed when the machine sleeps or wakes up, through SleepWatcher.

The blog post Mac OS X: Automating Tasks on Sleep by Kodiak explains really well how to set up sleepwatcher.


# 1. install
brew install sleepwatcher

# 2. create a script "~/.sleep" to be executed before the machine sleeps
# in my case it was "cd <vm-directory> && vagrant suspend"

# 3. set full permissions to the owner
chmod 700 ~/.sleep

# 4. start the watcher, then put your machine to sleep to test that the script works
/usr/local/sbin/sleepwatcher --verbose --sleep ~/.sleep

# 5. configure the daemon
sudo chown root:wheel /usr/local/Cellar/sleepwatcher/2.2/de.bernhard-baehr.sleepwatcher-20compatibility-localuser.plist

ln -sfv /usr/local/Cellar/sleepwatcher/2.2/de.bernhard-baehr.sleepwatcher-20compatibility-localuser.plist ~/Library/LaunchAgents/

launchctl load ~/Library/LaunchAgents/de.bernhard-baehr.sleepwatcher-20compatibility-localuser.plist

Knextancy - Node JS Multi Tenancy

At Bravi every project has had multi-tenant support since the beginning of its development. This is because we have built a solution so easy to use that it is almost imperceptible that we are developing a multi-tenant application. The only things we have to care about are:

  • pass the tenant id ahead in the HTTP headers for every consumed API/application
  • use a prefix in the table name when executing SQL queries: select * from "$_mytable"

And that is it. You are already building a multi-tenant application.

How does that work?

Well, the knextancy library takes care of everything for us.

Knextancy is built on top of knex and implements multi tenancy using table prefixes. So all the clients share the same database, but each one has its own tables. For example:

Every time a request like this one below is done:

curl -X GET -H "x-client-id: 11" "http://localhost:3000/"

Knextancy will identify the tenant associated with that request based on the x-client-id header. If it is the first request for that tenant on that server, knextancy will also execute the migration and seed tasks. So there is no manual configuration required to bring up a database for a new client.

You may ask yourself: how is that secure if the tenant is based on an HTTP header? The user could then manually change the tenant he is accessing.

Well, this is not a problem for services running inside a private network. For public apps/services, we run everything behind a proxy that injects the x-client-id based on the domain.
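As an illustration of that proxy rule (the domain map and function below are hypothetical, not our real proxy configuration): the proxy resolves the tenant from the request's host and injects x-client-id before forwarding.

```javascript
// Hypothetical mapping from public domains to tenant ids.
const domainToClientId = {
  "acme.example.com": "11",
  "globex.example.com": "12"
};

// Return a copy of the headers with the tenant id injected.
function injectClientId(headers, host) {
  const clientId = domainToClientId[host];
  if (!clientId) {
    throw new Error("unknown tenant domain: " + host);
  }
  return Object.assign({}, headers, { "x-client-id": clientId });
}

const forwarded = injectClientId({ accept: "text/html" }, "acme.example.com");
console.log(forwarded["x-client-id"]); // 11
```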

After that, knextancy injects into the request an instance of knex scoped to the current client (e.g. req.knex).

Then, to execute any SQL query with multi-tenant support, you just need to use req.knex, and any table must have the $_ prefix so knextancy can resolve the table for the current client.
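The prefix idea can be sketched like this (an illustration of the concept only, not knextancy's real implementation; the tenant_<id>_ naming scheme is an assumption):

```javascript
// Rewrite the "$_" placeholder into a per-tenant table name before the
// query runs, so client 11 and client 12 hit different tables.
function resolveTenantSql(sql, clientId) {
  return sql.replace(/\$_/g, "tenant_" + clientId + "_");
}

const sql = resolveTenantSql('select * from "$_mytable"', 11);
console.log(sql); // select * from "tenant_11_mytable"
```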

Simple example of how to build a multi-tenant REST API: using knextancy with express and a PostgreSQL database.

Understanding a Phoenix application and its lifecycle

This is a post for beginners in Elixir and Phoenix. Its intention is to help you understand how a Phoenix application works, and a bit of its lifecycle.

I started the day thinking about writing a chat application to apply what I have learned so far of Elixir. But even after reading the whole of Elixir - Getting Started and Phoenix - Up And Running, I did not really feel prepared to write an application on my own.

P.S. I didn’t read the Phoenix framework documentation before starting the application. I’m sure if I had read the docs, everything would make sense. But I was just too excited to start coding :D.

So instead of writing a chat app from scratch, what we are going to do is understand how the chat app built by Chris McCord works.

Before digging in, let's see the application running:

  • Clone the repo chrismccord/phoenix_chat_example, then cd to the new directory
  • Install dependencies with mix deps.get
  • (optional) Install npm dependencies to customize the ES6 JS/Sass with npm install
  • Start the Phoenix server with mix phoenix.server

Now you can visit localhost:4000 from your browser.


The mix.exs file contains the definition for all dependencies and configures the OTP application.

Check out elixir/Application to understand what an OTP application is. The presentation OTP Has Done It by Nick DeMonner also gives a good idea about it.

defmodule Chat.Mixfile do
  use Mix.Project

  def project do
    [app: :chat,
     version: "0.0.1",
     elixir: "~> 1.0",
     elixirc_paths: ["lib", "web"],
     compilers: [:phoenix] ++ Mix.compilers,
     deps: deps]
  end

  # Configuration for the OTP application
  # Type `mix help` for more information
  def application do
    [mod: {Chat, []},
     applications: [:phoenix, :phoenix_html, :cowboy, :logger]]
  end

  # Specifies your project dependencies
  # Type `mix help deps` for examples and options
  defp deps do
    [{:phoenix, "~> 1.0.0"},
     {:phoenix_html, "~> 2.1"},
     {:phoenix_live_reload, "~> 1.0", only: :dev},
     {:phoenix_ecto, "~> 1.1"},
     {:postgrex, ">= 0.0.0"},
     {:cowboy, "~> 1.0"}]
  end
end

The application function defines Chat as the startup module. It also defines all the applications your application depends on at runtime.


defmodule Chat do
  use Application

  # See the docs for more information on OTP Applications
  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    children = [
      # Start the endpoint when the application starts
      supervisor(Chat.Endpoint, []),
      # Start the Ecto repository
      worker(Chat.Repo, [])
      # Here you could define other workers and supervisors as children
      # worker(Chat.Worker, [arg1, arg2, arg3]),
    ]

    # See the docs for other strategies and supported options
    opts = [strategy: :one_for_one, name: Chat.Supervisor]
    Supervisor.start_link(children, opts)
  end

  # Tell Phoenix to update the endpoint configuration
  # whenever the application is updated.
  def config_change(changed, _new, removed) do
    Chat.Endpoint.config_change(changed, removed)
    :ok
  end
end
lib/chat.ex defines the OTP application. As we can see, Chat.Endpoint is started as a supervisor, which will start the endpoint when the application starts and restart it in case it crashes.

And Chat.Repo is started as a worker, which runs the repository in a different process. That allows it to keep state (e.g. the connection pool) between different requests. Otherwise it would be necessary to establish a new DB connection for every request.


defmodule Chat.Endpoint do
  use Phoenix.Endpoint, otp_app: :chat

  # Requests coming at "/socket" path will be handled by
  # UserSocket (web/channels/user_socket.ex)
  socket "/socket", Chat.UserSocket

  # Serve at "/" the given assets from "priv/static" directory
  plug Plug.Static,
    at: "/", from: :chat,
    only: ~w(css images js favicon.ico robots.txt)

  # Code reloading will only work if the :code_reloader key of
  # the :phoenix application is set to true in your config file.
  if code_reloading? do
    socket "/phoenix/live_reload/socket", Phoenix.LiveReloader.Socket
    plug Phoenix.CodeReloader
    plug Phoenix.LiveReloader
  end

  # Log the requests
  plug Plug.Logger

  # Configure parsers
  plug Plug.Parsers,
    parsers: [:urlencoded, :multipart, :json],
    pass: ["*/*"],
    json_decoder: Poison

  plug Plug.MethodOverride
  plug Plug.Head

  plug Plug.Session,
    store: :cookie,
    key: "_chat_key",
    signing_salt: "LH6XmqGb",
    encryption_salt: "CIPZg4Qo"

  # Only after passing through all the previous Plug
  # the request will be handled by the Chat.Router (web/router.ex)
  plug Chat.Router
end

lib/chat/endpoint.ex uses Plug a lot. Plug allows composing modules between web applications. With Plug it is possible to change the request and response data throughout the connection lifecycle. It is comparable to middleware in Node.js.

Check out understanding-plug and plug project.


defmodule Chat.Router do
  use Phoenix.Router

  pipeline :browser do
    plug :accepts, ["html"]
    plug :fetch_session
    plug :fetch_flash
    plug :protect_from_forgery
  end

  pipeline :api do
    plug :accepts, ["json"]
  end

  scope "/", Chat do
    pipe_through :browser # Use the default browser stack

    get "/", PageController, :index
  end
end

In Chat.Router we see the definition of pipelines, which are a simple way to pipe a request through a series of plugs before passing it ahead to a controller. Different pipelines can be used for different types of request. For example: an API request must be handled differently from a browser (page) request.

Once a request arrives at the Phoenix router, it performs a series of transformations through pipelines until the request is dispatched to a desired end-point.

Such transformations are defined via plugs, as defined in the Plug specification. Once a pipeline is defined, it can be piped through per scope.


defmodule Chat.PageController do
  use Chat.Web, :controller

  def index(conn, _params) do
    render conn, "index.html"
  end
end

After a request passes through all the previous plugs, it will be handled by the controller. For instance a GET / will respond with the index.html page.


As we saw before in Chat.Endpoint, socket connections will be handled by Chat.UserSocket.

defmodule Chat.UserSocket do
  use Phoenix.Socket

  channel "rooms:*", Chat.RoomChannel

  transport :websocket, Phoenix.Transports.WebSocket
  transport :longpoll, Phoenix.Transports.LongPoll

  def connect(_params, socket) do
    {:ok, socket}
  end

  def id(_socket), do: nil
end

Basically Chat.UserSocket creates a channel for topics matching rooms:*, and adds support for both WebSocket and long polling connections.


defmodule Chat.RoomChannel do
  use Phoenix.Channel
  require Logger

  @doc """
  Authorize socket to subscribe and broadcast events on this channel & topic

  Possible Return Values

  `{:ok, socket}` to authorize subscription for channel for requested topic

  `:ignore` to deny subscription/broadcast on this channel
  for the requested topic
  """
  def join("rooms:lobby", message, socket) do
    # Exit signals arriving to a process are converted to {'EXIT', From, Reason}
    # messages, which can be received as ordinary messages
    Process.flag(:trap_exit, true)
    :timer.send_interval(5000, :ping)
    send(self, {:after_join, message})

    {:ok, socket}
  end

  def join("rooms:" <> _private_subtopic, _message, _socket) do
    {:error, %{reason: "unauthorized"}}
  end

  def handle_info({:after_join, msg}, socket) do
    broadcast! socket, "user:entered", %{user: msg["user"]}
    push socket, "join", %{status: "connected"}
    {:noreply, socket}
  end

  def handle_info(:ping, socket) do
    push socket, "new:msg", %{user: "SYSTEM", body: "ping"}
    {:noreply, socket}
  end

  def terminate(reason, _socket) do
    Logger.debug "> leave #{inspect reason}"
    :ok
  end

  def handle_in("new:msg", msg, socket) do
    broadcast! socket, "new:msg", %{user: msg["user"], body: msg["body"]}
    {:reply, {:ok, %{msg: msg["body"]}}, assign(socket, :user, msg["user"])}
  end
end

Pretty simple, it handles:

  • new users joining the channel
  • broadcasting new users to the chat
  • sending ping messages
  • broadcasting a user’s message

Check out Phoenix.Channel for a further explanation.

The rest is just an HTML page loading a CSS stylesheet (source in Sass) and a JavaScript file (source in ES6) which consumes the socket provided by the chat channel.

P.S. I have just started learning Elixir and Phoenix. Let me know if I have misunderstood something.