Setup DCD for Two Node Cluster

October 29, 2016


One of the most basic clusters you will create within your ProfitBricks environment will likely be a two-node cluster. In this tutorial we will take you through the process of putting the design together in the DCD and then provisioning it. The specifications we will use are geared towards a Microsoft SQL Server cluster; however, you should be able to adapt this architecture to suit your specific application's needs. This tutorial will not cover installation or configuration of the software deployed into the cluster. We will cover that task in separate articles in the future.

Instance Configuration

To begin, create a new Data Center from within the DCD or re-use an existing Data Center.

Add three Assemblies from your palette to the drawing grid. These represent the two clustered instances plus a remote access server you can use to connect from the public Internet into the private network where your cluster is deployed. You want to keep your cluster private, which is why only a single server is connected to the public Internet. For outbound communication from servers on the private network to the public Internet, it is recommended that you deploy some type of firewall solution and use it as the gateway.

We have configured our instances with the following resources:

| Setting | Value |
| ---- | ---- |
| Cores | 4 | 
| RAM in GB | 4 |

Since Microsoft requires us to license a minimum of four cores, we reflect that in our settings. 4 GB of RAM is also the recommended base memory size for any SQL Server instance. Your software may have different hardware requirements.

We are building all instances using Windows 2012 R2.

While you can leave the availability zone set to Auto, it is best to explicitly place each node of your cluster into a separate availability zone.
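The sizing and placement above can be sketched as a set of server definitions. This is a minimal, illustrative sketch: the field names loosely follow Cloud API conventions, but the exact payload shape, the server names (`sql001`, `sql002`, `shell001` is from the tutorial), and the zone identifiers are assumptions to be checked against the ProfitBricks API documentation.

```python
def server_spec(name, cores=4, ram_gb=4, availability_zone="AUTO"):
    """Build a server definition matching the tutorial's sizing.

    Defaults reflect the tutorial: 4 cores (Microsoft's licensing
    minimum) and 4 GB RAM (recommended base size for SQL Server).
    """
    return {
        "name": name,
        "cores": cores,
        "ram": ram_gb * 1024,  # assumed: API expects RAM in MB
        "availabilityZone": availability_zone,
    }

# Place each cluster node in a separate availability zone; the remote
# access server can stay on Auto. Names sql001/sql002 are hypothetical.
servers = [
    server_spec("sql001", availability_zone="ZONE_1"),
    server_spec("sql002", availability_zone="ZONE_2"),
    server_spec("shell001"),  # remote access server, zone left to AUTO
]
```

Keeping the two SQL nodes in distinct zones ensures a hardware failure in one zone cannot take down both cluster members at once.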

Storage Volumes

You can find a variety of recommendations on how to organize your volumes. Since this tutorial is geared towards building a basic SQL cluster we will be using three volumes with the following layout:

| Volume | Content |
| --- | --- |
| 1 | OS, binaries |
| 2 | User Databases, System Databases, tempdb |
| 3 | Transaction Logs |

You can certainly move your system databases or tempdb to a different storage volume, but in most situations the above suffices. The reasoning behind separating each component at a more granular level is to make it easier to identify performance issues; for example, you would be able to determine whether an issue lies with tempdb versus a user database.

In general, we use the following sizes for each storage volume:

| Volume | Size | Content |
| --- | --- | --- |
| 1 | 50 GB | OS, Binaries | 
| 2 | 100 GB | User Databases, System Databases, tempdb |
| 3 | 20 GB | Transaction Logs |
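The volume layout above can be expressed as a simple per-node definition. This is only a sketch of the sizes from the table; the field names are illustrative, not an actual provisioning API.

```python
# Per-node volume layout from the tutorial's sizing table.
volumes = [
    {"volume": 1, "size_gb": 50,  "content": "OS, binaries"},
    {"volume": 2, "size_gb": 100, "content": "User Databases, System Databases, tempdb"},
    {"volume": 3, "size_gb": 20,  "content": "Transaction Logs"},
]

# Total provisioned storage per cluster node.
total_gb = sum(v["size_gb"] for v in volumes)  # 170 GB per node
```

Keeping transaction logs on their own volume also makes it straightforward to grow log storage independently if write activity outpaces expectations.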

Configure Networking

There will be three LANs in this given topology.

The first is a heartbeat network between the two cluster instances. You can create a connection between the two instances in the DCD. We will use this for Availability Group communication in our SQL environment.

Connect all instances together to form a private network. This is where we communicate with any additional internal services, such as internal DNS.

Connect the shell001 instance to the public Internet.

We have configured the networking in the following way:

| Interface | Network | DHCP | IP |
| --- | --- | --- | --- |
| NIC1 | Heartbeat | Disabled | |
| NIC2 | Private | Disabled | |
| NIC3 | Public | Enabled | *Assigned by ProfitBricks* |

On NIC1 and NIC2 we are assigning static IPs.
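The NIC layout can be sketched as follows. The static IP addresses shown are hypothetical examples, not values from the tutorial; substitute addresses from your own private LAN ranges.

```python
# NIC layout per the networking table. Static IPs here are hypothetical
# placeholders -- use addresses from your own private address plan.
nics = [
    {"name": "NIC1", "lan": "Heartbeat", "dhcp": False, "ip": "192.168.10.11"},
    {"name": "NIC2", "lan": "Private",   "dhcp": False, "ip": "192.168.20.11"},
    {"name": "NIC3", "lan": "Public",    "dhcp": True,  "ip": None},  # assigned by ProfitBricks
]

# Only the public-facing NIC relies on DHCP; the cluster-internal NICs
# use static addressing so heartbeat and replication endpoints stay stable.
static_nics = [n["name"] for n in nics if not n["dhcp"]]
```

Static addressing on the heartbeat and private networks avoids lease changes that could disrupt cluster communication or Availability Group endpoints.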
