Node on Ubuntu on Vagrant on Hyper-V

The following is how I got NodeJS running on Ubuntu using Vagrant in Windows with Hyper-V.

Why Hyper-V?

Paying for VMware goes against my open-source leanings, and I'm a lone dev, so any expenses have to be seriously considered.

VirtualBox is a great product and I've used it a lot in the past, but I never liked the networking interfaces it added to my system. I've tried using Vagrant with VirtualBox on Windows before and always run into folder and file permission and syncing problems. I have my reservations about Oracle too.

Hyper-V, on the other hand, is baked natively into Windows 8.1 and higher (I'm using Windows 10). It's straightforward to enable and configure, and it plays nicely with the Windows filesystem and networking stack.

Why Ubuntu?

I've used it more than any other Linux distribution. I'm very familiar with configuring it and running it in production. I dare say the steps I outline below are fairly 'box' agnostic though.

Step 1. Install Vagrant

Head on over to https://www.vagrantup.com/downloads.html to download the installer, run it and wait for it to finish.

Step 2. Enable Hyper-V

Go to Uninstall or change a program; you can find this in the toolbar on This PC, via the Control Panel, or just by searching for it.

Next, click Turn Windows features on or off on the left side of the screen and make sure Hyper-V is checked.

Enabled Hyper-V

You might have to reboot to fully enable Hyper-V, but once it's enabled you can check it under the Performance tab in Task Manager.

Check Hyper-V is enabled

Step 3. Create Hyper-V network switch

This step is really important. If you do not do this, Vagrant will not be able to connect to the box. So hit your Start button and search for Hyper-V Manager. Once in, find Virtual Switch Manager...

Hyper-V Virtual Switch Manager link

In the 'Virtual Switch Manager' select New virtual network switch. Now you have three choices:

New virtual network switch

External creates a network switch that is connected to your system's physical network. This will allow your Vagrant box to connect to the outside world, and vice versa.

Internal creates a network switch that allows your host system and the virtual machines in Hyper-V to talk to each other. If you select this option, your Vagrant box will not have internet access.

Private creates a network switch that can only be used by the virtual machines. This is useless for Vagrant.

I suggest using External, as it means the box can reach the internet for apt-get and the like. So select External and hit Create Virtual Switch. All you need to do now is give your virtual switch a name. Hit 'OK' and close the Hyper-V Manager.
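If you prefer the command line, the same external switch can be created from an elevated PowerShell prompt. This is just a sketch: the switch name is up to you, and the adapter name "Ethernet" is an assumption, so check yours first with Get-NetAdapter.

```powershell
# List physical adapters to find the right -NetAdapterName value
Get-NetAdapter

# Create an external switch bound to that adapter; -AllowManagementOS
# keeps the host's own connectivity on the shared adapter
New-VMSwitch -Name "External Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```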

Step 4. The Vagrantfile

Now that we have the host operating system set up and Vagrant installed, it's time to actually create a Vagrant box.

In the directory your project will be in, type the following command:

vagrant init  

This will create a single file called Vagrantfile in your directory. This file is all you need and is where you'll put your instructions for setting up your Vagrant box.

Ignoring all the comments and commented-out statements, the basic Vagrantfile looks like this:

Vagrant.configure(2) do |config|  
  config.vm.box = "base"
end  

That's it. The base box is the default and is all well and good, but I want 64-bit Ubuntu, so change "base" to "hashicorp/precise64". If you want a different base system, you can find more pre-built boxes at https://atlas.hashicorp.com/boxes/search

Next we have to tell Vagrant to use Hyper-V, as it defaults to VirtualBox, so add the following line:

Vagrant.configure(2) do |config|  
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
end  

Finally, we want to make sure the Vagrant box has access to the public network (the internet) so we can grab apt packages and the like. So add the following line:

Vagrant.configure(2) do |config|  
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
end  

This is the absolute basics we need. Save your Vagrantfile and we're ready to fire it up.

Step 5. vagrant up

One quirk of Vagrant on Hyper-V is that it must be run as Administrator. So whether you're using Command Prompt, Powershell, Cygwin, or Git Bash, you need to make sure you run it as Administrator.

So to get things going open your CLI window, navigate to your project folder and type:

vagrant up  

If you see an error regarding the provider, you may need to force the use of Hyper-V:

vagrant up --provider=hyperv  

If all goes well, you should see something along the lines of the following. It might take a while as it has to download the virtual hard drive for the Ubuntu version we selected. Also, because we're using Linux with Windows, Vagrant needs to set up a Samba share. So you'll need to enter your Windows credentials.

$ vagrant up
Bringing machine 'default' up with 'hyperv' provider...  
==> default: Verifying Hyper-V is enabled...
==> default: Importing a Hyper-V instance
    default: Cloning virtual hard drive...
    default: Creating and registering the VM...
    default: Successfully imported a VM with name: precise64
==> default: Starting the machine...
==> default: Waiting for the machine to report its IP address...
    default: Timeout: 120 seconds
    default: IP: 192.168.1.174
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.1.174:22
    default: SSH username: vagrant
    default: SSH auth method: password
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Preparing SMB shared folders...
    default: You will be asked for the username and password to use for the SMB
    default: folders shortly. Please use the proper username/password of your
    default: Windows account.
    default:
    default: Username: xxxxx
    default: Password (will be hidden):
    default: Password (will be hidden): xxxxxxxx
==> default: Mounting SMB shared folders...
    default: C:/Users/lewis/Desktop/vagrant test => /vagrant

If you see something like the above then everything ran fine. You can SSH in to your box by typing vagrant ssh.

Some useful Vagrant commands are:

vagrant up - create a box, or start one from a halt
vagrant halt - graceful shutdown
vagrant destroy - remove the box
vagrant suspend - pause the box at its exact state
vagrant resume - resume from suspend
vagrant reload - reboot, e.g. after a config change
vagrant provision - re-run the provisioning scripts
vagrant ssh - SSH in to your box

More can be found on the Vagrant website.

Step 6. Provisioning and NodeJS

Every time you create a box from a Vagrantfile or vagrant up after a vagrant destroy, Vagrant will create your box from scratch. While we could install all the software we need each time, it makes sense to tell Vagrant to do it for us. This is known as provisioning.

To get started with provisioning NodeJS, create a new file called bootstrap.sh. This is a bash script where we'll put the commands we need to run. I'm going to install NVM, the Node version manager, because NodeJS releases new versions very quickly and I'd rather select my NodeJS version manually.

#!/usr/bin/env bash

wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash  
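Note that installing NVM on its own doesn't give you a Node binary; the bootstrap script still needs to install a version. A sketch of the extra lines you could append, where 4.2.1 is just an example version:

```bash
# Load nvm into this shell session, then install and default a Node version
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"
nvm install 4.2.1
nvm alias default 4.2.1
```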

Now in your Vagrantfile add the following line. The important part here is at the end, privileged: false. By default, provisioning scripts run as root, but we want NVM installed as the vagrant user.

config.vm.provision :shell, path: "bootstrap.sh", privileged: false  

So now your whole Vagrantfile should look like this:

Vagrant.configure(2) do |config|  
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
  config.vm.provision :shell, path: "bootstrap.sh", privileged: false
end  

In your bootstrap.sh you would also put anything else you want run automatically, such as setting environment variables, pulling from a remote repository, or even installing a database system like Redis.
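For instance, a bootstrap.sh that also sets an environment variable and installs Redis might look something like this (the variable value is a placeholder; redis-server is the Ubuntu package name):

```bash
#!/usr/bin/env bash

# Install NVM as before
wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash

# Persist an environment variable for future shells (value is a placeholder)
echo 'export NODE_ENV=development' >> "$HOME/.profile"

# Install Redis from the Ubuntu repositories
sudo apt-get update
sudo apt-get install -y redis-server
```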

All done

These are just the basic steps to get NodeJS running in Ubuntu with Vagrant using Hyper-V on Windows. There is a lot more to Vagrant than I can cover here, so I suggest you start with their Getting Started guide.

A Node crisis of confidence

After over ten years of PHP, I've spent the last couple of years almost exclusively developing with Node.js. I consider myself pretty proficient: happy with prototypal inheritance, Promises, and much of the new ES6 stuff.

However, today I started to get the feeling that I'd been doing something wrong for a long time. It concerns the sharing of resources across modules.

The question being: should a shared resource be required once and passed to modules, or can we rely on Node's caching (and pseudo-singleton implementation) to prevent duplicating resources? Consider the following two approaches:

One

// app.js
var db = require('db');
var one = require('module_one')(db);
var two = require('module_two')(db);

// module_one.js
module.exports = function(db){  
  return {
    func_one : function(){
      ...
    }
  }
}

// module_two.js
module.exports = function(db){  
  return {
    func_two : function(){
      ...
    }
  }
}

Two

// app.js
var one = require('module_one');
var two = require('module_two');

// module_one.js
var db = require('db');
module.exports = {  
  func_one : function(){
    ...
  }
}

// module_two.js
var db = require('db');
module.exports = {  
  func_two : function(){
    ...
  }
}

In the first approach, only one instance of db is ever created and used. This is explicitly clear, though it does introduce dependency injection; the module cannot stand alone.

In the second approach, the db module is required twice, but because it's cached by Node, the second require gets the same instance as the first. The Node developers acknowledge this is not guaranteed, though. If it fails, memory consumption could shoot up and database connections could grind to a halt.

I much prefer the second approach, the one I've used for two years without incident. But now I'm concerned it may be the wrong approach for larger projects.

If you've any thoughts, contact me on Twitter.

Update

After reading Liam Kaufman's article, How AngularJS Made Me a Better Node.js Developer, and this thread on Stack Overflow, I've come to the conclusion that the answer to the above is: it depends. Both approaches are equally valid and have different use cases. I'd still be keen to see if there is any performance benefit to one approach over the other.

Persistent NodeJS apps on restart

As far as I'm concerned, the only two decent solutions for keeping NodeJS apps running properly are PM2 and Forever. They each have their strengths and weaknesses, which is a matter for another post.

I recently found myself in a position where I had to use both. I had an application I'd written that I wanted to run on Node 4.x.x, and a Ghost install that insists on Node 0.10.40. I use NVM to manage different versions on the same machine, and found it best to use PM2 for one app and Forever for the other. Not very elegant, but it works and is holding up.

After running for a few days I patched the server OS, and it dawned on me that if I restarted the server I'd have to manually restart both Forever and PM2. So here is the method I used to have my apps relaunch properly, using the SystemV init system. This should be fairly easy to port to other init systems.

First thing to do is create new startup and shutdown scripts:

touch /var/www/startup.sh  
touch /var/www/shutdown.sh  

The first line in my startup.sh 'sources' the nvm script, otherwise bash complains it can't find nvm. So here is the startup script:

#!/usr/bin/env bash
. ~/.nvm/nvm.sh
export NODE_ENV=whatever_this_should_be  
nvm use 4.2.1  
cd /var/www/webapp-one/  
pm2 start index.js  
nvm use 0.10.40  
cd /var/www/webapp-two/  
forever start index.js  

Hopefully the above is self-explanatory, it's just a series of commands to switch to the right Node install and get things up and running.

It's also important to have an elegant shutdown script, so here's mine:

#!/usr/bin/env bash
. ~/.nvm/nvm.sh
nvm use 4.2.1  
cd /var/www/webapp-one/  
pm2 kill  
nvm use 0.10.40  
cd /var/www/webapp-two/  
forever stopall  

As before, we still need to 'source' nvm; after that it's just a series of commands to shut things down elegantly.

Make both of these files executable:

chmod +x /var/www/startup.sh  
chmod +x /var/www/shutdown.sh  

It's a good idea here to test they work, hopefully you'll know by looking at the output:

cd /var/www  
./startup.sh
./shutdown.sh

Now we can get init.d to manage the startup and shutdown for us. So in the /etc/init.d directory create a new file:

sudo touch /etc/init.d/myapps  

Init scripts follow a certain format, the following is for SystemV:

#! /bin/sh
WWW_DIR=/var/www/  
case "$1" in  
  start)
    su myuser -c $WWW_DIR/startup.sh
    ;;
  stop)
    su myuser -c $WWW_DIR/shutdown.sh
    sleep 3
    ;;
  restart)
    su myuser -c $WWW_DIR/shutdown.sh
    sleep 3
    su myuser -c $WWW_DIR/startup.sh
    ;;
  *)
    echo "Usage: myapps {start|stop|restart}" >&2
    exit 3
    ;;
esac  

In the above file it should be obvious what each section does. Prefixing our bash scripts with su myuser -c tells the script to run the command as a certain user; you of course will pick whatever your username is. sleep simply pauses the script to let things settle.

Once you've saved it, make it executable for all users, and test it out:

sudo chmod a+x /etc/init.d/myapps

cd /etc/init.d  
sudo ./myapps start # to test startup  
sudo ./myapps stop  # to test shutdown  
sudo ./myapps restart # to test restart  

If all is good, you can tell SystemV to run it at system startup and shutdown:

sudo update-rc.d myapps defaults  

Now you can manage your apps like any other service:

sudo service myapps start  
sudo service myapps stop  
sudo service myapps restart  

Personally I prefer to have one service to handle everything. If I need to manage an individual app I'll do that manually, but you could create separate init.d scripts for all your separate apps.