Posted on April 13th, 2017 - By Nick Stears
The hyper-converged market is growing rapidly in today’s climate; the market alone is worth upwards of one billion pounds and is thought to be growing at a rate of 150% every year. For those who do not understand what converged and hyper-converged infrastructure are, I will try to explain in the simplest form.
With a non-converged infrastructure you have, for example, a virtualisation server (running Hyper-V, Xen, KVM etc.), which connects to some form of data storage via direct attached storage (DAS), a storage area network (SAN) or a network attached storage (NAS) device. The virtual machines’ disks are hosted completely separately from the virtualisation server. The storage device will have some form of RAID configured and optimised for performance and redundancy, but the key point is that they are completely separate; you would generally connect multiple virtualisation servers to the storage array.
With a hyper-converged infrastructure, everything is rolled into one: disks are stored on the same server, with a storage controller running as a service on each node (you need a minimum of two nodes), which means you can scale your cluster while also maintaining the redundancy and resiliency that a dedicated storage device gives you. Storage is then abstracted as a separate layer, which is used to create virtual SANs within the same hardware, demonstrated in the picture below:
With Windows Server 2016, hyper-convergence is made possible by a feature called Storage Spaces Direct. This technology allows every node within the same cluster to see each disk as if it were its own. Storage Spaces Direct also makes sure that every disk is resilient, so there are at least two copies of the data split across multiple nodes. If a node or a disk fails, the data from that disk is still intact elsewhere. Storage Spaces Direct acts as a storage controller, which replaces the need for physical hardware RAID, although you can still use both for performance gains.
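The two-copy idea can be illustrated with a small sketch. This is purely a toy model of mirrored placement, not Storage Spaces Direct’s actual algorithm, and the names (`place_extents`, the node labels) are hypothetical:

```python
# Toy model only: how a two-way mirror might place copies of each data
# extent so that no two copies of the same extent land on the same node.
# This is NOT Storage Spaces Direct code, just an illustration of the idea.

def place_extents(extents, nodes, copies=2):
    """Rotate each extent's copies across distinct nodes."""
    placement = {}
    for i, extent in enumerate(extents):
        # Pick `copies` distinct nodes, starting from a rotating offset,
        # so the copies of one extent never share a node.
        placement[extent] = [nodes[(i + c) % len(nodes)] for c in range(copies)]
    return placement

layout = place_extents(["A", "B", "C", "D"], ["node1", "node2"])
for extent, where in layout.items():
    print(extent, "->", where)
# Every extent ends up with a copy on both nodes, so either node can
# fail without losing data.
```

Because every extent has a copy on each of the two nodes, either node can be lost and the full data set is still available from the survivor.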
The number of nodes in your cluster determines how resilient and how efficient you can make your infrastructure. In a simple 2-node cluster, the lowest entry point for this infrastructure (for obvious reasons), you are limited to a two-way mirror, which allows for the complete failure of one node. To determine which machine is live in the event of a failure, you do require an external witness server. This can be anything within your network outside of your cluster, but it could also be cloud based. A witness server casts the deciding vote when a node fails: if node 1 cannot communicate with node 2, and vice versa, the external witness has the final say as to which node should be active. It is used primarily in even-node clusters to ensure that there is always a majority vote in the event of a hardware or network failure on one or more nodes.

When setting up a 3-node cluster, Microsoft recommend a three-way mirror. This allows the failure of one node as well as a failed disk on a second node simultaneously, an extra layer of redundancy compared to the 2-node cluster. A witness server is not required for this setup, as with three nodes there will always be a deciding vote. A 4-node cluster allows for dual parity, which adds another layer of redundancy and is the setup Microsoft recommend for optimal performance. There are also figures to suggest that a 4-node cluster is around 50% storage efficient, an 8-node cluster around 66%, and a 16-node cluster up to 80% efficient with a full SSD configuration.
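The efficiency figures above follow from how much raw capacity each resiliency scheme consumes. As a rough back-of-the-envelope check (my own simplified arithmetic, not official Microsoft numbers; the `4+2` parity layout below is a hypothetical example group size):

```python
# Back-of-the-envelope storage efficiency: usable capacity / raw capacity.
# Mirroring keeps full copies of the data; parity stores the data plus a
# smaller number of parity symbols. Simplified figures for illustration.

def mirror_efficiency(copies):
    """A c-way mirror stores c full copies, so only 1/c of raw space is usable."""
    return 1 / copies

def parity_efficiency(data_symbols, parity_symbols):
    """Erasure coding stores d data symbols plus p parity symbols per group."""
    return data_symbols / (data_symbols + parity_symbols)

print(f"two-way mirror:   {mirror_efficiency(2):.0%}")   # 50%
print(f"three-way mirror: {mirror_efficiency(3):.0%}")   # 33%
# Dual parity gets more efficient as the group widens; for example a
# hypothetical 4 data + 2 parity layout:
print(f"4+2 dual parity:  {parity_efficiency(4, 2):.0%}")  # 67%
```

The general pattern is the same one the figures above show: mirrors pay a fixed copy cost, while parity spreads a fixed overhead across a wider group, so efficiency climbs as you add nodes.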
There are many other features under the hood that deserve an honourable mention. One example is that you can set priorities on virtual machines. If you need to take one node down and you do not have the memory to fail all VMs over to a different node, you can set different priority levels. When a node is put into maintenance, it will always prioritise moving the VM with the highest priority to the next available node. If there is insufficient memory to move all of the virtual machines, those with a lower priority will pause until the original node is brought back online.
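That behaviour can be sketched as a simple priority-ordered plan. This is a hypothetical model of the idea, not Hyper-V’s actual placement logic, and the VM names and sizes are made up:

```python
# Sketch of priority-based failover under a memory constraint: the
# highest-priority VMs move first; anything that no longer fits stays
# paused until the original node returns. Illustrative model only.

def plan_failover(vms, free_memory_mb):
    """vms: list of (name, priority, memory_mb); higher priority moves first."""
    moved, paused = [], []
    for name, priority, mem in sorted(vms, key=lambda v: -v[1]):
        if mem <= free_memory_mb:
            free_memory_mb -= mem
            moved.append(name)
        else:
            paused.append(name)  # resumes when the original node comes back
    return moved, paused

vms = [("web01", 3, 4096), ("db01", 5, 8192), ("test01", 1, 4096)]
moved, paused = plan_failover(vms, free_memory_mb=12288)
print("moved:", moved)    # db01 and web01 fit within the 12 GB available
print("paused:", paused)  # test01 waits for the original node
```

With only 12 GB free on the surviving node, the 8 GB database VM and the 4 GB web VM are moved in priority order and the low-priority test VM is left paused.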
The health service, which monitors the state of the drives in your nodes, has also been improved over 2012 R2. If a disk fails for any reason, the end user will be notified and the disk will be highlighted within the node; it can then be replaced and rebuilt without any further intervention (other than physically replacing the disk).
This was just a small glimpse into hyper-convergence, and we look forward to rolling it out to many of you in the coming months! If you are interested in this technology, feel free to contact us by email at email@example.com or call us on 0800 0803 200 to discuss your requirements.
Posted on July 22nd, 2015 - By Nick Stears
In our industry, uptime is absolutely everything: we need to ensure services are online 100% of the time regardless of network or hardware issues. To meet such high standards when they are required, we use ARCserve Hyper-V High Availability and Assured Recovery to ensure we are never vulnerable to downtime. VooServers prides itself on having multiple points of presence, which gives us the flexibility to offer customers additional failover to another site in the event of a failure at the power or network level. The concept is relatively simple: both servers must be running the same operating system, have the ARCserve engine driver running and Hyper-V installed:
Posted on May 12th, 2015 - By Nick Stears
Backing up laptops and office computers is imperative for most companies; whether it is your accountant’s important financial documents, your designer’s website content or your Managing Director’s email data, it is crucial that your data is safe and recoverable. It is equally important, however, for companies to do their bit for the environment and ensure workstations are switched off when not in use, to cut down on power costs and energy. The main problem with this is that when laptop lids get shut and PCs are put on standby, your precious data rarely gets a chance to back itself up properly. With Backup2Go none of this matters: your workstations will back up wherever they are, at whatever time, provided they have internet access and power, and without the need to begin a backup from scratch. With very little administration required, Backup2Go is software you can depend on.
One of the reasons we started to look at Backup2Go is that it seamlessly supports Windows, Mac and Linux based operating systems, a must for offices that give their employees a choice of OS. Once the agent is downloaded and installed, a quick five-minute setup to configure which server to back up to and you are ready to go. With Backup2Go the administration is a doddle: once you have your P5 backup server set up and a template in place, you can begin assigning agents to it. The configurable options on the templates are extremely useful:
When the backups are running, you can conveniently administer all of your agents in the P5 overview control panel, where a traffic light system shows the current status of every running agent. This makes it clear how long it has been since the last successful backup and whether there are any problems.
Restoring files is equally simple and can be done on the user’s workstation or from the P5 administration panel. This takes away the hassle of having to contact your backup admin, as users can simply log in to the agent themselves and restore their files from any given point in time (displayed on the right of the screenshot below).
We have been very impressed with the flexibility Backup2Go offers: it allows us to manage backups of clients’ laptops and workstations with the freedom of knowing that their data is being securely backed up regardless of where they are.
Posted on March 13th, 2015 - By Nick Stears
For any IT related company in ownership of IP space using Cisco hardware, it is usually good practice to assign specific subnets to clients in their own VLAN; with IPv4 space becoming sparse, this will usually be something as small as a /30 or even a /32. Although this makes administration a lot easier, there are still companies that distribute entire /24 blocks in one VLAN and assign a few IPs from this pool, trusting that clients will only use the IPs they have been given. This is all well and good, but you are relying purely on trust here, and from time to time clients may attempt to take advantage of the situation and use additional IPs that do not belong to them. There is also the scenario where you may simply have assigned somebody the wrong IP address by mistake. Fortunately we can quite easily track down the culprit port on the Cisco with a few simple commands.
The first step is to ping the IP in question from another IP within that subnet. This is important as it will make the IP show up in the ARP table alongside the MAC address we require. Once you have sent the ping, type arp -a in the Windows command prompt, or simply arp on Linux. ARP stands for Address Resolution Protocol and is used to resolve network layer addresses into link layer addresses; here we will be able to see the IP we have just pinged and the MAC address associated with it.
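If you have a lot of entries to sift through, a small helper can pull out the MAC for you. The sample output below is made up for illustration (the exact formatting of `arp` output varies between Windows and Linux), and `mac_for_ip` is a hypothetical helper name:

```python
# Quick helper to pull the MAC address for a given IP out of captured
# `arp -a` output. The sample text below is fabricated for illustration;
# real output formatting differs between Windows and Linux.

SAMPLE_ARP_OUTPUT = """\
Interface: 192.168.1.10 --- 0xb
  Internet Address      Physical Address      Type
  192.168.1.50          00-1a-2b-3c-4d-5e     dynamic
  192.168.1.254         aa-bb-cc-dd-ee-ff     dynamic
"""

def mac_for_ip(arp_output, ip):
    for line in arp_output.splitlines():
        fields = line.split()
        # In Windows-style `arp -a` output, the MAC is the second field
        # on each entry line; skip headers and non-matching rows.
        if fields and fields[0] == ip:
            return fields[1]
    return None

print(mac_for_ip(SAMPLE_ARP_OUTPUT, "192.168.1.50"))  # 00-1a-2b-3c-4d-5e
```

On Linux the MAC sits in a different column of `arp` output, so the field index would need adjusting; the approach is the same.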
The next thing we need to do is log in to our Cisco devices. Here we have logged on to our router and broken the 12-character MAC address into three sections of four, separated by dots, for Cisco to understand:
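That reformatting step is easy to get wrong by hand, so here is a small sketch that does the conversion described above (`to_cisco_mac` is a hypothetical helper name):

```python
# Convert a MAC address from the dash/colon notation shown by `arp`
# (e.g. 00-1a-2b-3c-4d-5e or 00:1a:2b:3c:4d:5e) into Cisco's dotted
# format of three four-character groups (001a.2b3c.4d5e), as above.

def to_cisco_mac(mac):
    hex_digits = mac.replace("-", "").replace(":", "").lower()
    if len(hex_digits) != 12:
        raise ValueError(f"expected 12 hex characters, got {mac!r}")
    # Split the 12 characters into three groups of four, joined by dots.
    return ".".join(hex_digits[i:i + 4] for i in range(0, 12, 4))

print(to_cisco_mac("00-1a-2b-3c-4d-5e"))  # 001a.2b3c.4d5e
```

The dotted form can then be used in IOS lookups such as show mac address-table address 001a.2b3c.4d5e on most Cisco switches.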
Posted on February 18th, 2015 - By Nick Stears
Here at VooServers we use OwnCloud as a simple way of uploading and storing files on a server, making it easy to distribute direct download links to our clients, but as we began to explore OwnCloud in more detail we found there is a lot more under the hood than initially meets the eye.
Posted on January 23rd, 2015 - By Nick Stears
Keeping your data safe is imperative for most organisations. People say ‘you can never have enough backups’, and while that may be a little clichéd, it really is a good ethos to adopt if you are concerned about data loss. Here at VooServers we use R1soft’s CDP as it gives us the flexibility and customisation needed to ensure we have everything backed up and stored in one centralised location.
There are three fundamental parts to CDP server: the server, the disk safe and the policy. Adding the server is a relatively painless task; it is simply a matter of installing a small agent on the destination server (compatible with Linux and Windows, amongst others), adding a key to allow the two servers to authenticate, and you’re all set.
Creating the disk safe is just as simple and gives you the flexibility of adding either a single disk, multiple disks, or automatically adding all disks on the server. This is obviously quite handy if you are regularly adding new disks to a system, as it saves a lot of administrative time.
The final and most crucial part of the procedure is adding and configuring the policies. This is where R1soft shines: simply select the server, select the disk safe and then customise the policy exactly how you want it. For instance, you can set the backup frequency to daily, hourly or even every minute, and then set the number of recovery points to however many days’ worth of backups you require. To give an example, if you backed up a server every hour and wanted to hold the data for a month, you would set 744 recovery points (24 backups a day × 31 days in a month). This allows you to restore from 744 different points in time over the space of a month, giving you ultimate flexibility; of course, you can back up or retain more frequently if you prefer.
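The recovery-point arithmetic from that example is simple enough to check in a couple of lines (`recovery_points` is my own hypothetical helper, not part of R1soft):

```python
# The recovery-point arithmetic from the example above: how many restore
# points you need to keep a given backup frequency for a given retention.

def recovery_points(backups_per_day, retention_days):
    return backups_per_day * retention_days

# Hourly backups retained for a 31-day month:
print(recovery_points(backups_per_day=24, retention_days=31))  # 744
```

Swapping in a different frequency or retention window gives the figure to enter in the policy; for example, backups every 15 minutes kept for a week would need 96 × 7 = 672 points.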
Beyond the scheduling and retention periods, further customisations can be made: for instance, you can exclude certain files or directories from backups, back up individual databases via the MySQL/SQL add-ons, and there are also add-ons to back up Exchange instances or cPanel accounts.
Let’s take a typical case study: you have a multi-user cPanel server with a dozen or so SQL databases. A user makes an error and corrupts their SQL database, taking their website offline and potentially losing revenue. With R1soft CDP we can simply select a restore point (usually a few moments before the error was made), find the correct database, click restore, and the database will be back to a working state within minutes! With this type of backup power at our disposal we can feel confident in the knowledge that any mistake can be put right with just a couple of clicks!