Node on Ubuntu on Vagrant on Hyper-V

The following is how I got NodeJS running on Ubuntu, using Vagrant on Windows with Hyper-V.

Why Hyper-V?

Paying for VMWare goes against my open-source leanings, and I'm a lone dev so any expenses have to be seriously considered.

VirtualBox is a great product and I've used it a lot in the past, but I never liked the virtual network adapters it added to my system. I've tried using Vagrant with VirtualBox on Windows before and always ran into folder and file permission and syncing problems. I have my issues with Oracle too.

Hyper-V, on the other hand, is baked natively into Windows 8.1 and higher (I'm using Windows 10). It's straightforward to enable and configure, and it plays nicely with the Windows filesystem and networking stack.

Why Ubuntu?

I've used it more than any other Linux distribution. I'm very familiar with configuring it and running it in production. I dare say the steps I outline below are fairly 'box' agnostic though.

Step 1. Install Vagrant

Head on over to https://www.vagrantup.com/downloads.html to download the installer, run it and wait for it to finish.

Step 2. Enable Hyper-V

Go to 'Uninstall or change a program'. You can find this in the toolbar of This PC, via the Control Panel, or by just searching for it.

Next, click 'Turn Windows features on or off' on the left side of the screen and make sure Hyper-V is checked.

Enabled Hyper-V
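I did this through the GUI, but if you prefer the command line, running the following from an elevated PowerShell prompt should do the same thing (this is the standard cmdlet rather than something from my original walkthrough):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All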

You might have to reboot to fully enable Hyper-V, but once it's enabled you can check it under the Performance tab in Task Manager.

Check Hyper-V is enabled

Step 3. Create Hyper-V network switch

This step is really important. If you do not do this, Vagrant will not be able to connect to the box. So hit your Start button and search for Hyper-V Manager. Once in, find Virtual Switch Manager...

Hyper-V Virtual Switch Manager link

In the Virtual Switch Manager, select 'New virtual network switch'. You now have three choices:

New virtual network switch

External creates a network switch that is connected to your system's physical network. This will allow your Vagrant box to connect to the outside world, and vice versa.

Internal creates a network switch that allows your host system and the virtual machines in Hyper-V to talk to each other. If you select this option, your Vagrant box will not have internet access.

Private creates a network switch that can only be used by the virtual machines. This is useless for Vagrant.

I suggest using External, as it means you can use apt-get and the like from inside the box. So select External and hit 'Create Virtual Switch'. All you need to do now is give your virtual switch a name. Hit 'OK' and close the Hyper-V Manager.
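If you'd rather script this step, something along these lines from an elevated PowerShell prompt should create an equivalent external switch. The switch name and adapter name below are just examples, so substitute your own:

# Create an external switch bound to the physical adapter named "Ethernet"
New-VMSwitch -Name "External Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true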

Step 4. The Vagrantfile

Now that we have the host operating system set up and Vagrant installed, it's time to actually create a Vagrant box.

In the directory your project will be in, type the following command:

vagrant init  

This will create a single file called Vagrantfile in your directory. This file is all you need and is where you'll put your instructions for setting up your Vagrant box.

Ignoring all the comments and commented-out statements, the basic Vagrantfile looks like this:

Vagrant.configure(2) do |config|  
  config.vm.box = "base"
end  

That's it. The base box is the default and is all well and good, but I want 64-bit Ubuntu. So change "base" to "hashicorp/precise64". If you want a different base system, you can find more pre-built boxes at https://atlas.hashicorp.com/boxes/search

Next we have to tell Vagrant to use Hyper-V, as I think it defaults to VirtualBox, so add the following line:

Vagrant.configure(2) do |config|  
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
end  

Finally, we want to make sure the Vagrant box has access to the public network (the internet) so we can grab apt packages and the like. So add the following line:

Vagrant.configure(2) do |config|  
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
end  

These are the absolute basics we need. Save your Vagrantfile and we're ready to fire it up.

Step 5. vagrant up

One quirk of Vagrant on Hyper-V is that it must be run as Administrator. So whether you're using Command Prompt, PowerShell, Cygwin, or Git Bash, you need to make sure you run it as Administrator.

So to get things going open your CLI window, navigate to your project folder and type:

vagrant up  

If you see an error regarding the provider, you may need to force the use of Hyper-V:

vagrant up --provider=hyperv  

If all goes well, you should see something along the lines of the following. It might take a while, as Vagrant has to download the virtual hard drive for the Ubuntu version we selected. Also, because we're running a Linux guest on a Windows host, Vagrant needs to set up an SMB share, so you'll need to enter your Windows credentials.

$ vagrant up
Bringing machine 'default' up with 'hyperv' provider...  
==> default: Verifying Hyper-V is enabled...
==> default: Importing a Hyper-V instance
    default: Cloning virtual hard drive...
    default: Creating and registering the VM...
    default: Successfully imported a VM with name: precise64
==> default: Starting the machine...
==> default: Waiting for the machine to report its IP address...
    default: Timeout: 120 seconds
    default: IP: 192.168.1.174
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.1.174:22
    default: SSH username: vagrant
    default: SSH auth method: password
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Preparing SMB shared folders...
    default: You will be asked for the username and password to use for the SMB
    default: folders shortly. Please use the proper username/password of your
    default: Windows account.
    default:
    default: Username: xxxxx
    default: Password (will be hidden):
    default: Password (will be hidden): xxxxxxxx
==> default: Mounting SMB shared folders...
    default: C:/Users/lewis/Desktop/vagrant test => /vagrant

If you see something like the above then everything ran fine. You can SSH into your box by typing vagrant ssh.

Some useful Vagrant commands are:

vagrant up - create a box, or start one from a halt
vagrant halt - graceful shutdown
vagrant destroy - remove the box
vagrant suspend - pause the box at its exact state
vagrant resume - resume from suspend
vagrant reload - reboot, maybe after a config change
vagrant provision - re-run the provisioning scripts
vagrant ssh - SSH in to your box

More can be found on the Vagrant website.

Step 6. Provisioning and NodeJS

Every time you create a box from a Vagrantfile or vagrant up after a vagrant destroy, Vagrant will create your box from scratch. While we could install all the software we need each time, it makes sense to tell Vagrant to do it for us. This is known as provisioning.

To get started with provisioning NodeJS, create a new file called bootstrap.sh. This is a bash script where we'll put the commands we need to run. I'm going to install NVM, the Node version manager, because NodeJS releases new versions very quickly and I'm happy to select my NodeJS version manually.

#!/usr/bin/env bash

wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash  

Now in your Vagrantfile add the following line. The important part here is at the end, privileged: false. By default provisioning scripts run as root, but we want NVM installed as the vagrant user.

config.vm.provision :shell, path: "bootstrap.sh", privileged: false  

So now your whole Vagrantfile should look like this:

Vagrant.configure(2) do |config|  
  config.vm.box = "hashicorp/precise64"
  config.vm.provider "hyperv"
  config.vm.network "public_network"
  config.vm.provision :shell, path: "bootstrap.sh", privileged: false
end  

In your bootstrap.sh you would also put anything else you want run automatically, such as setting environment variables, pulling from a remote repository, or even installing a database system like Redis.
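As a rough sketch, a bootstrap.sh that installs NVM and then an actual Node version might look like the following. The Node version number here is just an example, so pick whichever release you want:

#!/usr/bin/env bash

# Install NVM (pinned to the version used earlier in this post)
wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.31.0/install.sh | bash

# Load NVM into this shell session so we can use it straight away
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

# Install a Node version and make it the default (version is an example)
nvm install 4.4.0
nvm alias default 4.4.0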

All done

These are just the basic steps to get NodeJS running on Ubuntu with Vagrant, using Hyper-V on Windows. There is a lot more to Vagrant than I can cover here, so I suggest you start with their Getting Started guide.

A Node crisis of confidence

After over ten years of PHP, I've spent the last couple of years almost exclusively developing with Node.js. I consider myself pretty proficient: happy with prototypal inheritance, Promises, and much of the new ES6 stuff.

However, today I started to get the feeling that I'd been doing something wrong for a long time. It concerns the sharing of resources across modules.

The question being: should a shared resource be required once and passed to modules, or can we rely on Node's module caching (and its pseudo-singleton behaviour) to prevent duplicating resources? Consider the following two approaches:

One

// app.js
var db = require('./db');
require('./module_one')(db);
require('./module_two')(db);

// module_one.js
module.exports = function(db){  
  return {
    func_one : function(){
      ...
    }
  }
}

// module_two.js
module.exports = function(db){  
  return {
    func_two : function(){
      ...
    }
  }
}

Two

// app.js
require('./module_one');
require('./module_two');

// module_one.js
var db = require('./db');
module.exports = {  
  func_one : function(){
    ...
  }
}

// module_two.js
var db = require('./db');
module.exports = {  
  func_two : function(){
    ...
  }
}

In the first approach, only one instance of db is ever created and used, and that is explicit. It does, however, introduce dependency injection: the modules cannot stand alone.

In the second approach, the db module is required twice, but because Node caches modules, the second require returns the same instance as the first. The Node developers acknowledge, though, that this caching is not guaranteed. If it fails, memory consumption could shoot up and database connections could grind to a halt.
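To make the caching point concrete, here's a minimal sketch of what a shared db.js might look like; the connection object is a stand-in rather than a real driver API:

// db.js
// The connection is created once, the first time this file is required.
// Node caches the module, so every later require('./db') normally
// receives this same object.
var connection = {
  query: function(sql, callback){
    // ...talk to the actual database here
  }
};

module.exports = connection;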

I much prefer the second approach, the one I've used for two years without incident. But now I'm concerned it may be the wrong approach for larger projects.

If you've any thoughts, contact me on Twitter.

Update

After reading Liam Kaufman's article, How AngularJS Made Me a Better Node.js Developer, and this thread on Stack Overflow, I've come to the conclusion that the answer to the above is: it depends. Both approaches are equally valid and have different use cases. I'd still be keen to see whether there is any performance benefit to one approach over the other.

Key only SSH

To cut back on hacking attempts and make things just that little bit more secure, it's a good idea to disable the use of passwords to log in via SSH.

Of course you'll still need a way in, so make sure your public key is in the server's ~/.ssh/authorized_keys file.
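If it isn't there yet, something like the following from your local machine will add it; the user and host below are placeholders for your own:

ssh-copy-id user@your-server

# or, if ssh-copy-id isn't available:
cat ~/.ssh/id_rsa.pub | ssh user@your-server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'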

To disable the use of passwords with SSH, edit the sshd_config file using something like nano. You'll need to run this with sudo.

sudo nano /etc/ssh/sshd_config

Find the following lines and change them, or add them if they're missing:

RSAAuthentication yes  
PubkeyAuthentication yes  
ChallengeResponseAuthentication no  
PasswordAuthentication no  
UsePAM no  

One caveat here: I found on Ubuntu 12.04 that when I turn off UsePAM, the banner I usually see when connecting with SSH is not shown.

To fix this I uncommented and amended the line which reads #Banner /etc/issue.net:

Banner /etc/motd  

Of course you'll need to restart sshd. Depending on what service management system you use, enter one of the following:

sudo service ssh restart
or
sudo /etc/init.d/ssh restart

Important: Don't lose the private keys which match the public keys you've used, or you'll never get back in!

Holga Digital

In August last year I backed my first Kickstarter, a digital camera based on the venerated plastic Holga. After 'crowd-sourcing' which colour I should select back in December, my multi-coloured little wonder was delivered yesterday.

My Holga Digital

Things I've learned

Holga Digital

The elegance of optimised syntax

The following is an article I wrote on Medium back in September 2014. I just rediscovered it and liked it, so I thought I'd 'reprint' it here.


While staring at some code recently (code that wasn't working) I was struck by how my code has improved over time through simple changes of syntax. As an example, I'll use toggling the value of a boolean variable: myvar.

For those that don't know, a boolean value can only be either true or false. Some languages also permit the use of 1 or 0.

So when I first started out I would have explicitly compared the value to true in an if statement. If the variable is true, set it to false. Otherwise, set it to true:

if(myvar === true){  
   myvar = false;
} else {
   myvar = true;
}

After learning that anything other than a 'falsey' value equates to true, I omitted the explicit comparison:

if(myvar){  
   myvar = false;
} else {
   myvar = true;
}

An embarrassingly long time went by before I discovered the ternary operator, which makes simple conditional assignments very concise:

myvar = (myvar) ? false : true;  

I thought this was as elegant as it could get: a single line, a single mention of the variable to be assigned, and the two options separated by simple punctuation.

However, the application of simple logic and the use of the not operator (!) can turn the original five-line, forty-seven character if statement into a single-line, thirteen-character piece of poetry:

myvar = !myvar;  

In case this doesn't make sense to you, it sets the variable to whatever it is not. So if myvar is true it becomes not true, or false. If it is false, it becomes not false, or true.

Each code block above does exactly the same thing and probably has a similar performance impact. Yet the effect on code readability and source file size is surely significant.