Posted on January 11th, 2017 - By Matt Parkinson
Roll the dice testing was an interesting new concept introduced to me in September whilst at Microsoft Ignite 2016, and ever since I’ve found myself explaining it to existing and new customers as a fantastic way to perform disaster recovery testing.
All too often when we think about DR planning and exercising/testing, we specifically choose exactly what we are testing, which doesn’t accurately reflect the real-world scenarios we might need to recover from. As an example, you might have an Exchange DAG set up within your data centre and test that DAG, but what if we lost the data centre? Do we test for that? Do we even consider that a possibility?
If there’s anything that working in a data centre environment has taught me over the years, it is that we are consistently progressing resiliency options for what we expect to happen, and we find solutions to what has happened in the past. But what do we do when the unexpected happens?
Thinking back to some unexpected outages over the past couple of years, one incident stands out as unexpected more than any other: the case of an electrician who was working on a power feed he had isolated, but which had then been re-enabled without his knowledge. The electrician received a life-threatening shock and the power was immediately isolated, shutting down all systems within the data centre for a significant amount of time whilst emergency services were in attendance, and for even longer whilst an initial investigation into potential wrongdoing was conducted.
In another scenario, we know that there is fire suppression within data centres to protect crucial equipment, but often this is linked to a shutdown of power and cooling, which in some cases can lead to extensive outages.
So the big question is: how can we possibly provide resiliency in unexpected scenarios such as these? I believe roll the dice testing has a large role in determining what to do.
Roll the dice testing initially involves drawing out the different elements of your infrastructure into a 2×3 grid, a 2×6 grid and so on, depending on how many components you think are relevant. You can even go further, having a grid for each location/department and another grid for sections within that location/department. As an example, you may have your different global offices and data centres in one grid, and then a second grid for the different racks or suites you operate within each data centre.
The next step is to grab a die (or dice) to match the size of grid you chose (which is why we work in sixes, in case you hadn’t worked that out!) and treat the part of your company that the roll lands on as though it has been completely lost. Now start to work out what systems are down, the DR protocol for recovering them, the expected time to recovery and the impact to the business. Scary? Well, we’re not finished yet. Roll the dice again and you’ve now lost another section 30 minutes after the first. What’s the situation now? You wanted to look at the unexpected: would you expect to lose two parts of your infrastructure at the same time? My guess would be no, but it happens, and those are the events which have the biggest impact.
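If you’d rather not keep physical dice in the server room, the roll itself is easy to script. Here is a minimal shell sketch, assuming a 2×6 grid where the rows and columns map to whatever components you drew out:

```shell
# Roll for a cell in a 2x6 grid of infrastructure components.
# (Row/column meanings are whatever you drew on your grid.)
row=$(( RANDOM % 2 + 1 ))
col=$(( RANDOM % 6 + 1 ))
echo "Scenario: total loss of grid cell ${row},${col}"

# Roll again for the compounding second failure 30 minutes in.
row2=$(( RANDOM % 2 + 1 ))
col2=$(( RANDOM % 6 + 1 ))
echo "Then, 30 minutes later: loss of cell ${row2},${col2}"
```

Run it once per exercise; the second roll models the compounding failure 30 minutes after the first.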
So, this is in essence a simple way to completely randomise your DR testing and get you thinking about the unexpected. The more diverse you are with testing the better, and if you find a weakness in your DR testing or protocol then come and speak to us and see how we deliver highly resilient and highly available infrastructure from multiple locations at the same time up to 3,500 miles apart. Yes, you could theoretically still lose both data centres at the same time, however, the element of risk is substantially different.
Posted on October 6th, 2016 - By Matt Parkinson
Last week our Technical Director attended Microsoft Ignite 2016 in Atlanta to keep up to date on the latest developments from Microsoft and the technology industry. Matt writes about his experience below and why we attend events such as these.
“Microsoft Ignite for the past 2 years has become a fundamental part of our internal training process and for shaping the direction of the company. You might think that a trip to the US seems like good fun however with 26 hours of seminars to attend during the week it’s a lot of information to take in and a beer was very welcome at the end of each day before getting an early night!
Microsoft Ignite brings together I.T. pros from all over North America and Europe to share knowledge and keep up to date on the latest developments not just from Microsoft but the technology industry in general.
A number of people that haven’t attended before tend to think of Ignite as a Microsoft brainwashing event, so I think it’s important to note that the sessions are conducted not just by Microsoft’s own employees but by MVPs and partners, which provides several different perspectives on the way the industry is changing. The event also has a large expo floor where you can discuss problems and try out new products outside the formal sessions, gathering even more perspectives.
Posted on February 11th, 2016 - By Matt Parkinson
Recently a client came to me with a custom dedicated server requirement to run a piece of software called Potree (www.potree.org) for rendering 3D point clouds on the web for users to access and view data.
This is a great piece of software, however when trying to compile it on CentOS Linux I came across the following error, the most important line being “unrecognized command line option ‘-std=c++14’”. In the interest of keeping this short and on point, the error simply means that the version of GCC that ships with CentOS is older than the one the software was built with and doesn’t support C++14.
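As a quick sanity check before compiling, you can compare the installed GCC against the minimum needed: `-std=c++14` requires GCC 4.9 or later, whereas stock CentOS 7 ships with 4.8.x. The devtoolset package name in the comments is an assumption; use whichever version your repositories offer.

```shell
# Compare the installed GCC version against the minimum needed for -std=c++14.
need=4.9
have=$(gcc -dumpversion 2>/dev/null || echo 0)
oldest=$(printf '%s\n%s\n' "$need" "$have" | sort -V | head -n1)
if [ "$oldest" != "$need" ]; then
  echo "GCC $have is too old for -std=c++14 (need >= $need)"
  # One common fix on CentOS is a newer GCC from Software Collections,
  # enabled for the build shell (package name may vary by version):
  #   sudo yum install -y centos-release-scl
  #   sudo yum install -y devtoolset-7-gcc-c++
  #   scl enable devtoolset-7 bash
fi
```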
Posted on December 24th, 2015 - By Matt Parkinson
The question of whether to go for a dedicated server or to colocate your own server or hardware is one that we are often asked here at VooServers, and 90% of the time cost is the main driver for colocation. The other 10% is made up of flexibility, compliance, ownership and the rather ambiguous ‘other’ category.
With cost being the main driver, it’s a good time to reference a recent article about cost analysis using costbergs, as this has a bearing on what we are discussing today.
We will discuss each of these points below, starting with cost and some common misconceptions.
When looking at the cost it’s easy to see why people are drawn towards colocation: at face value you see a one-off purchase of an asset expected to last you 3 years, followed by lower monthly fees, which over the 3-year period makes the apparent TCO (Total Cost of Ownership) much lower than a dedicated server of the same specification.
When you look at what a dedicated server offers, though, and try to bring a colocated server up to the same level of service, the value that a dedicated server offers quickly starts to outweigh colocation.
The first thing is hardware SLAs. We build servers every day and stock all of the components that we use, so should a replacement component ever be required it’s already on site and can be diagnosed and swapped out within 30 minutes, 24 hours a day, 7 days a week. If you were to look at getting the same type of agreement with HP or Dell it’s generally a 4-hour window and will cost you around £600 a year, and you would still need one of your staff members to diagnose the fault and arrange the replacement with Dell/HP, which adds to the cost not only in your engineer’s time but also in the time that the service is down.
Even if you are close to your colocation facility, I can’t imagine many I.T. managers would want to be diagnosing a server fault at 11PM on a Friday night. With a dedicated server this is all taken care of by our staff at no additional cost to you. If you don’t have an in-house I.T. team, or only have one or two members in I.T., this becomes even more of a problem: business is gradually turning towards 24×7, the infrastructure that powers it needs to be available all the time, and any issues need to be fixed without delay. In such a scenario we like to consider ourselves an extension of your I.T. team, assisting you in providing I.T. services to your company.
It’s a bit of a misconception that there is less flexibility with a dedicated server than with colocation. In fact, I’d say a dedicated server gives you more flexibility, especially with a provider such as VooServers who specialise in custom dedicated servers and can understand your needs and build something to your exact specifications.
With a dedicated server you can also choose one specification to start with and upscale as and when necessary, either by upgrading individual components within the same server or by replacing the whole server altogether. Although upgrading components is also possible with colocation, if you wanted to replace the entire server you would again have to pay out its full cost, and in most circumstances that happens before the initial 3-year period the server was expected to last, and was budgeted for, expires.
Compliance is something that has grown significantly as a concern over the past few years, with more and more people becoming aware of off-site I.T. infrastructure possibilities. Many I.T. managers see compliance as something they need to handle entirely themselves; however, VooServers have experience with a number of different compliance standards such as PCI DSS and ISO 27001.
With our experience we can be an extension of your I.T. team when it comes to compliance, whereas with colocation you need to undertake the entire process yourself. Particularly for smaller companies this is a huge benefit, as most companies these days want to be able to process payments online but don’t have the experience or budget to audit the process themselves.
Ownership can be another tricky area, although it is a question that comes up less and less as more of the large companies become accepting of online services.
There are two main areas when it comes to ownership: assets and data. On the asset side, colocation gives you full ownership of the asset, however with that come the caveats discussed earlier around cost and flexibility, and you need to ask yourself whether owning that asset really offers you much benefit.
Data ownership, on the other hand, is a very big issue, as the data is what holds the actual value of the company. Without the data the business is nothing, so it’s paramount to protect ownership of it. This is what drives a number of people towards colocation: there are no questions around ownership, and it’s easy to demonstrate to other senior members of staff, as that’s what everyone has been used to for the past 15 years.
There are a number of online services which can be vague on this topic and on how they may use your data. At VooServers, however, our terms of service explicitly set out that ownership of the data remains your own under all circumstances and that we will not use it in any way other than to provide you with your own service, giving you the same ownership as a colocated server would offer.
The ‘other’ category isn’t something we can easily cover here, as it spans almost anything we haven’t already discussed, so I invite you to give us a call on 0800 0803 200 or e-mail us on firstname.lastname@example.org with your question and we will get back to you with a personalised response.
I hope to have demonstrated in this article that colocation really doesn’t offer you any benefits over a dedicated server, and my own personal opinion is that going for a dedicated server gives you a lot more options and better value for money. Of course, you might argue that would always be my response as we are a dedicated server provider, however you will often find the same answer from our customers.
Posted on December 11th, 2015 - By Matt Parkinson
You might wonder why on an I.T. blog we are asking whether you are prepared for winter when it’s a question normally reserved for your boiler man or garage.
Winter brings with it the possibility of a number of different adverse weather conditions, whether that be snow, high winds or flooding, and with that the interruption of business becomes a bigger possibility. It’s estimated that disruption from snow costs the UK economy over £500 million a day, with the cost of the recent floods in Cornwall alone initially reported at £6 million.
Here at VooServers we are specialists in data centre based solutions that can play a key part in keeping your workforce operational during unforeseen circumstances.
The biggest and most important thing is to plan. It doesn’t have to be complicated, it doesn’t have to be costly, but it does have to exist! A plan can be as simple as who should notify the staff of the issue, who to redirect the phones to and what responsibilities people should undertake if they are not able to complete their normal line of work.
If you already work with VooServers you are probably aware that we offer Skype for Business, Microsoft Exchange and Microsoft SharePoint, all of which can enable your business to remain fully operational with your users working from home, in the same manner as they would in the office, to ensure the highest productivity possible. It is important, however, to make sure that users have access to the software they will need to install, and have clear instructions on how to connect to these services once it is installed; both should be part of your plan.
If you don’t have existing access to these technologies, then talk to us today for a free no obligation audit and proposal.
Without these technologies, the easiest and cheapest step you can take to enable your users to work from home is to allow them to connect remotely to their normal work PCs. In most circumstances businesses already have the capability to do this with their existing setup; it just needs to be enabled. The worst case is a replacement router, which starts from under £100. If you are purely contingency planning then this is probably the cheapest step to take, but the efficiencies are not as great as with the technologies mentioned above, as it’s not the users’ normal way of working and, when it’s required, it still needs to be set up on each person’s home computer.
VooServers are able to offer a number of solutions not just to fulfil contingency plans but to enable greater working capabilities all of the time. Call us today on 0800 0803 200 or e-mail us on email@example.com to chat to one of our engineers about the possibilities.
Posted on October 23rd, 2015 - By Matt Parkinson
When designing failover solutions we often look at recovery time objectives (RTO) and recovery point objectives (RPO) to determine what the requirements for failover are.
RTO is the time within which recovery from a failure should be completed, and for most companies we work with this is under 30 minutes. Given that an average diagnosis can take 10 minutes, that leaves at most a further 20 minutes to fix the service and bring it back online.
RPO is the maximum acceptable data loss during a recovery operation. For instance, if relying solely on backups then your RPO might be 1 hour, and the backup policy should run every hour or less to comply with this. Note, though, that a backup/restore method often has a significant impact on the RTO.
These two figures often work together to determine how the service should be designed: you can’t use a traditional point-in-time backup to achieve an RTO of 5 minutes, but it may still achieve your RPO of 15 minutes. Likewise, you might have an RPO of 30 seconds but an RTO of 1 hour, meaning that data has to be as current as possible but it’s not too important if it takes a while to recover, providing it’s there.
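To make the interplay concrete, here is a minimal sketch that checks a proposed design against RTO/RPO targets. All figures are in minutes and purely illustrative:

```shell
# Illustrative targets and proposed design figures (minutes).
rto_target=30
rpo_target=15
diagnosis_time=10        # average time to diagnose a failure
recovery_time=20         # time to fix the service and bring it back online
backup_interval=60       # how often a point-in-time backup runs

if [ $(( diagnosis_time + recovery_time )) -le "$rto_target" ]; then
  echo "RTO met"
else
  echo "RTO missed"
fi

if [ "$backup_interval" -le "$rpo_target" ]; then
  echo "RPO met"
else
  echo "RPO missed: back up more often, or replicate instead"
fi
```

With these figures the design meets the 30-minute RTO, but a 60-minute backup interval cannot satisfy a 15-minute RPO, pointing towards more frequent backups or replication.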
Here at VooServers we work with a number of different technologies to design services that meet any RTO or RPO that is defined. We have a proven track record of delivering services across multiple sites with an RTO as low as 2 minutes and an RPO as low as 10 seconds.
Posted on September 9th, 2015 - By Matt Parkinson
A couple of years ago whilst networking at a conference I started chatting to someone about the difference between cost, price and value and first heard the word “costberg”. It intrigued me and it’s actually a very simple approach.
As much as 90% of an iceberg is below the surface of the water, and it is this principle that was being applied when talking about a costberg. Whilst we often quote for fully hosted or hybrid infrastructure, the client may perceive the on-premise solution as more cost effective; however, quite often they forget about the hidden costs.
With an on-premise system the perception is often that there is a high initial outlay on hardware and licensing but then very minimal cost for the lifetime of that system, seen as your electricity and your network connection. Some people will also think of the support costs for that system, software maintenance agreements and so on. But what if you really start to peel back the deeper costs? Perhaps the thermal output of the infrastructure and how that will affect heating/cooling requirements and costs? Maybe the price per square foot of the office space where the server is sitting?
Posted on August 6th, 2015 - By Matt Parkinson
Due to some uptake in our Outlook plugin for WHMCS we have decided to continue development, and today we are releasing version 1.1, which contains the following changes.
The ClickOnce installer is now signed, as a few people were seeing errors where the installer was previously unsigned.
The name of the assembly has also been modified, so if you installed the previous version you will need to remove it via Add/Remove Programs before installing the new version, to avoid both plugins being installed at the same time.
Download: WHMCS Outlook Plugin v1.1 (517.7 KiB)
Posted on June 12th, 2015 - By Matt Parkinson
Here at VooServers we are heavy users of both Microsoft Dynamics CRM and WHMCS and whilst both have their positives and negatives one of the great things about Dynamics CRM is its Outlook integration and in particular the ability to convert an e-mail you have received to a Case, Opportunity, Lead etc.
To date there is no Outlook plugin for WHMCS that is openly available and that offers the functionality to convert e-mails to support tickets so our Technical Director Matt Parkinson embarked on the task of putting something together.
You can find the download for the plugin at the end of this article which is designed to work with Outlook 2010 onwards. Once it is installed you will see it appear as a new tab on your ribbon in Outlook.
The first thing you will need to do is go to the WHMCS tab and press the “Configuration” button, which will open a new window for you to enter your WHMCS details. The URL should be entered in the format https://whmcsinstall.com/includes/api.php. The username and password are the ones you would use to log in to the WHMCS admin portal.
Please note that for the plugin to work, your IP address will need to be authorised for API access, and the role assigned to your user in WHMCS must have API access, otherwise the functions will not work.
Once configured, if you select an e-mail in Outlook and press the “Convert To Ticket” button, the plugin will take the sender’s e-mail address, locate the client in WHMCS and create a ticket under their client account containing the e-mail body.
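Under the hood this is just a call to the WHMCS API. As a rough sketch of the equivalent request (the OpenTicket action and the MD5-hashed admin password reflect the WHMCS API of the time, but treat the exact field values here as assumptions and substitute your own URL, credentials and department ID):

```shell
# Hypothetical sketch of the API call behind "Convert To Ticket".
# Nothing is sent automatically; call open_ticket yourself once real
# values are filled in and your IP is authorised for API access.
WHMCS_URL="https://whmcsinstall.com/includes/api.php"
ADMIN_USER="admin"
ADMIN_PASS_MD5=$(printf '%s' "yourpassword" | md5sum | cut -d' ' -f1)

open_ticket() {
  curl -s "$WHMCS_URL" \
    -d action=OpenTicket \
    -d username="$ADMIN_USER" \
    -d password="$ADMIN_PASS_MD5" \
    -d deptid=1 \
    -d email="sender@example.com" \
    -d subject="Converted from Outlook" \
    -d message="E-mail body goes here" \
    -d responsetype=json
}
# open_ticket
```

The function is only defined, not called; uncomment the final line once real values are in place.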
The plugin has so far been put together very quickly, so errors and bugs are likely to be present. If there is enough demand for the plugin then we will continue development with better error handling and additional features.
Version 1.1 is now available from here
Posted on February 3rd, 2015 - By Matt Parkinson
With cloud services being rapidly adopted, it’s important to look at how traditional enterprise applications can be transitioned to a public, private or even hybrid cloud configuration, allowing companies to achieve greater uptime and resiliency than ever before. The adoption of cloud services for enterprise applications such as Microsoft Exchange, Microsoft Lync and Dynamics CRM also removes the traditional initial outlay seen with on-premise or self-hosted services, satisfying both financial and technical decision makers.
Here at VooServers we are specialists in transitioning all sizes of businesses to the cloud ranging from a few individual users in a shared hosted exchange platform to several hundred users on a private cloud platform with integrated Exchange, Lync and CRM platforms to allow collaboration and unified communications across the workforce.
Although many applications such as Exchange and Lync can be natively configured for high availability through database availability groups and client access server clusters, we take that high availability one step further by deploying the server infrastructure to our OnApp-powered clouds. This provides us with resiliency at every level, from data centre power right through to the software itself. We can even go as far as configuring cross-data-centre or even cross-continent failover should an entire country’s or continent’s infrastructure be crippled, which, although difficult to imagine, is always a possibility.
OnApp provides us with a three-tiered approach to resiliency, first of all with the use of its integrated storage (IS) platform. The IS platform provides us with a highly available and redundant SAN for our virtual disks to sit on. You can read more about IS at [link to previous blog post]. We then layer on 2 separate hypervisor zones which contain virtual machines running the various software packages in their high-availability configurations in software. Lastly, we have OnApp load balancers provisioned at the top of the infrastructure to distribute load under normal working conditions and provide redundancy in the event of a server failure.
If you are interested in transitioning your company to the cloud and reaping the benefits both technically and financially contact us today for a personalised hosting proposal including detailed technical diagrams and information of how your company would look in the cloud.
Read more about how VooServers adapt applications to the cloud over at OnApp