Matt Parkinson, Author at VooServers

Oracle OpenWorld 2019 – Day 4


Posted on  - By

Oracle Exadata

It’s the 4th and final day of Oracle OpenWorld at the Moscone Center in downtown San Francisco. Despite it being the final day, with noticeably fewer people around than on previous days, a packed schedule awaited with a further 6 sessions, primarily covering information security using Oracle products.

When we talk about information security we categorise it into 3 areas: confidentiality, integrity and availability. That is, the protection of the data, the accuracy and consistency of the data, and, last but not least, the ability to access that data. This means we’re covering sessions on high availability and disaster recovery in addition to data protection in today’s run-down of events.

The first session of the day kicked off with new features of Enterprise Manager for capturing important events and notifications from Oracle Database systems. Enterprise Manager acts as a single pane of glass for viewing multiple database environments, whether they are on-premises, in another cloud, or in Oracle Cloud.

Enterprise Manager comes with some default monitoring templates, but custom templates can also be created and pushed out to all of the monitored hosts, making deployment simple and easy to change.

New in Enterprise Manager is the ability to group events into a single notification rather than receiving an alert for each. This is particularly useful during maintenance windows, such as when a node is taken offline and several of the related monitors are expected to trigger. Another new feature is the detection of runaway SQL queries, with the ability to automatically kill the runaway SQL. This feature looks for hung processes as well as SQL queries consuming more resources than defined and takes corrective action to rectify them. This is something that we monitor separately ourselves, but it will be a great addition to Enterprise Manager as well.

Next up for the day was a session covering the most important security features of Oracle Database and how to keep data secure. This session primarily covered features that have been around for a while but are either not configured to their full potential or are little known, so people are simply not using them. Oracle talk about 3 fundamental steps in securing the database: assess your current state, detect improper access to data, and prevent improper access to data.

The main features to take away from this session, which we will be working to roll out to our Oracle databases, are unified auditing, which combines the current 7 audit locations into a single audit trail, and network encryption, which allows client connections to be encrypted so that data is better protected in transit. There were also a number of other tuning steps taken from this session, and we will be working to roll out the applicable features to customers in the coming weeks.
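
As a rough sketch of what those two changes look like (the paths, algorithm choice and policy below are illustrative assumptions rather than our exact rollout, and unified auditing itself may first need enabling on the database), native network encryption can be required in sqlnet.ora and a predefined unified audit policy switched on from SQL*Plus:

# Illustrative only: require native network encryption on the server side
cat >> $ORACLE_HOME/network/admin/sqlnet.ora <<'EOF'
SQLNET.ENCRYPTION_SERVER = REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER = (AES256)
SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
EOF

# Illustrative only: enable one of the predefined unified audit policies
sqlplus / as sysdba <<'SQL'
AUDIT POLICY ORA_LOGON_FAILURES;
SQL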

The final session of the day worth mentioning covered the tools that Oracle are making available for assessing database security and availability options. One of the key tools discussed was the Database Security Assessment Tool (DBSAT), which assesses the configuration, identifies risky users, discovers sensitive data and provides assessment reports that can be used to tune the system. We will be running this on all of our environments and making recommendations based on the reports for increasing both data security and availability.
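
As a rough outline of how the tool itself is run (the user, connection string and file names are assumptions for illustration, and the exact arguments should be checked against the DBSAT documentation), data is first collected from the database and a report is then generated from that collection:

# Illustrative only: collect configuration data and produce a DBSAT report
./dbsat collect dbsat_user@ORCL dbsat_output
./dbsat report dbsat_output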


That’s it for Oracle OpenWorld 2019 and my time reporting on the information available from the event. It’s been a fantastic conference and a lot has been learnt, which we are looking forward to bringing to you in the coming weeks. There is also a lot of information that hasn’t been mentioned here to keep things readable, so if you would like to know more about the event please contact us using the contact page or phone number and we will be happy to discuss things relative to your environment.





Posted on  - By

It’s day 3 of Oracle OpenWorld in San Francisco and we’re back gathering all of the latest information for you from Oracle’s annual conference being held at the Moscone Center. Today we will be bringing you information primarily around Oracle Linux which sits at the core of many of the Oracle products.

The first notable session of the day was a solution keynote on Oracle’s infrastructure strategy for cloud and on-premises systems. The session opened by reiterating what has been a common theme each day this week: whether you want to run in Oracle Cloud Infrastructure, in other clouds or on-premises, the entire suite of products is available, and in the exact same format as on Oracle Cloud. One of the products used to demonstrate this was Exadata Gen 2, which is used as the foundation for OCI, with the exact same hardware and software available for Cloud at Customer from Oracle. The tag line “we use the same as our customers do” was also referenced to further reinforce this ethos.

The solution keynote also covered a number of innovations in Oracle Linux, the most important being Oracle Autonomous Linux, which was officially announced on Monday during Larry Ellison’s keynote speech. This new version of Oracle Linux primarily addresses security concerns by proactively fixing security issues and ensuring the operating system keeps itself up to date, best of all without any manual intervention or downtime required.

In addition to kernel-level updates, Oracle Autonomous Linux is also able to patch user-space packages, such as for the previously high-profile vulnerabilities in glibc and OpenSSL. Not only are the vulnerabilities patched, but tripwires are also inserted so that should a user or process try to exploit the vulnerability, an audit entry is created and the system maintainer can be notified. The biggest aim of Oracle Autonomous Linux is moving systems to always being up to date and as secure as possible by removing the required human labour and thus the possibility of human error.

Later in the day we continued with further information on Oracle Linux, this time on optimising the system to get the most performance out of Oracle Database. This session primarily addressed memory management, such as the use of huge pages and tuning the system swappiness to ensure the database makes as much use of the available memory as possible. It also covered some additional support tools which can be used for gathering system information for troubleshooting. We’ll be working to bring all of the key points from this session to our Oracle environments as soon as possible.
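
As a minimal sketch of the kind of memory tuning covered (the values below are illustrative assumptions and need sizing to the SGA of the database in question rather than copying verbatim):

# Illustrative only: lower swappiness and reserve huge pages for the database
cat >> /etc/sysctl.conf <<'EOF'
vm.swappiness = 1
vm.nr_hugepages = 2048
EOF
sysctl -p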

In between sessions we also took some time to visit the exhibition zone and talk with both current and potential future partners and providers such as DBVisit, Nutanix and Solarwinds. Nutanix easily had the best area in the exhibition zone, and not because of anything technical: they had puppies! All of the puppies were rescue dogs, there for some TLC from many willing attendees, and they were being extremely well looked after.


OpenWorld Exhibition Zone

Day 3 also finishes with the conference attendee party, known as CloudFest, which is being held at the Chase Center and features performances from John Mayer and Flo Rida. Attendee parties are always a great way to reflect on the information from the week and catch up with new and old connections over a few beers, but unfortunately they tend to fall before all the work is done, and the final day can be a bit of a struggle in the morning!


Chase Center

We’ll be back tomorrow for the 4th and final day of Oracle OpenWorld, including our round-up of the whole week and the most exciting things we’ve been introduced to that we’ll be bringing to you in the coming weeks.





Posted on  - By

We were back at the Moscone Center in San Francisco today attending Oracle OpenWorld 2019 and obtaining the latest information on all things Oracle. The day was planned to be quite varied, with information on MySQL, Oracle Database and Oracle Cloud across 6 different sessions.

The day started with a tutorial on InnoDB Cluster, a relatively new high-availability feature added to MySQL under Oracle’s ownership. InnoDB Cluster offers high availability for MySQL databases with an ease-of-configuration ethos, which was clear to see from the demos given. What’s best about InnoDB Cluster is that it is even included in the Community Edition of MySQL, with the 3 main components of the cluster system also being open source.

The number of websites that run MySQL is phenomenal, so being able to add an enterprise-grade high-availability feature to protect a database is a huge advantage for the many small and medium businesses that rely on MySQL. What’s more, in the demos the downtime for a failure was 5 seconds, which for a community database system is incredible.
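
As a rough sketch of how simple the configuration is (the host names and account are assumptions for illustration, and the instances need preparing with the usual MySQL Shell configuration steps first), a cluster is created on the first node and the remaining nodes are added to it:

# Illustrative only: build a 3-node InnoDB Cluster from MySQL Shell (JS mode)
mysqlsh --js clusteradmin@node1:3306 <<'EOF'
var cluster = dba.createCluster('prodCluster');
cluster.addInstance('clusteradmin@node2:3306');
cluster.addInstance('clusteradmin@node3:3306');
cluster.status();
EOF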

Oracle OpenWorld 2019

The next big item for the day was Oracle Database In-Memory, which allows data in tables to be held in memory for faster access than from the storage system. The In-Memory options significantly improve database performance, and combined with Automatic In-Memory management from Oracle 18c it’s getting much easier to create significantly faster databases on the same hardware.
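
A minimal sketch of what this looks like in practice (the size and table name are assumptions for illustration): the in-memory column store is sized and individual tables are then marked for population.

# Illustrative only: size the in-memory column store and mark a table for it
sqlplus / as sysdba <<'SQL'
ALTER SYSTEM SET inmemory_size = 4G SCOPE=SPFILE;
-- restart the instance for inmemory_size to take effect, then:
ALTER TABLE sales INMEMORY PRIORITY HIGH;
SQL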

The last item to note for the day is new functionality in RMAN that allows backups to the Oracle Cloud using the Database Backup Cloud Module. This feature allows easy archival of backups offsite into the Oracle Cloud object storage platform, which can help to meet a number of compliance requirements, such as an end-of-quarter or end-of-year archive, without impacting on space on the local RMAN servers.
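
As a rough sketch of how the module is wired in (the library and configuration file paths are assumptions for illustration; the module’s installer generates the real ones), an SBT channel is pointed at the cloud module and backups then run as normal:

# Illustrative only: send an RMAN backup to the cloud via the backup module
rman target / <<'EOF'
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/opt/oracle/obcm/lib/libopc.so, SBT_PARMS=(OPC_PFILE=/opt/oracle/obcm/config/opc.ora)';
BACKUP DEVICE TYPE SBT DATABASE;
EOF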

What was interesting about the RMAN backups to the cloud is that it’s a clear way to make the most of cloud alongside existing environments or in multi-cloud environments. This is something we are always keen to see, as making the best of each environment to come up with an overall multi-cloud solution is something we’re always interested in achieving for our clients. Finally for the day, Oracle again made clear in a keynote that they are working to deliver all of their products to the location in which you want to consume them and are not just pushing their own cloud services. Although Microsoft have also started to take this stance in the past couple of years, they don’t appear to be as far along as Oracle.





Posted on  - By

September to December is what we like to call conference season, with several annual conferences and tech events taking place in these months. The most notable for us in recent years has been Microsoft Ignite, which for the past few years has typically taken place in the last 2 weeks of September.

This year Microsoft Ignite has been moved to November, which has paved the way for us to attend the conference for another of our biggest product lines, Oracle OpenWorld, which kicked off today in San Francisco. Our technical director Matt Parkinson is there to pick up the latest news and developments that will shape our Oracle services in the coming year.

Oracle has been one of our fastest growing product lines in recent years, and you may have also seen that we’ve recently been accepted onto the G-Cloud framework for our Oracle services, which makes attending the conference this year even more beneficial.

Part of our managed services offering is that we attend these events on behalf of our customers to obtain information that is relevant to them, and ultimately to us, so that we can move services forward together. We’ll be working hard to bring you the latest updates in a new blog post every day this week from the conference, so be sure to check back each day for the latest post. You can also follow the hashtag #OOW19 on Twitter for information from ourselves and other Oracle partners throughout the week.

Oracle OpenWorld 2019

Day 1 got off to a packed start with 6 general sessions attended and a keynote speech from Larry Ellison, co-founder and CTO of Oracle. The sessions today covered a variety of different topics, from new features in Oracle Database 19c to migration paths onto Oracle Cloud Infrastructure.

One of the early sessions covered 18 different methods of moving to Oracle Cloud Infrastructure, which varied in complexity, downtime and features. The key takeaway was that many of the methods used for migrations on other infrastructure carry through to OCI, making it easy to migrate. It was also clear that, depending on your infrastructure and requirements, there is an option to suit.

Another key session today was on database innovation in 12c and 18c, which discussed some of the features available in newer versions of Oracle Database. The key points to note were the increased security ethos, with tools like privilege capture, which can be used to analyse privileges and revoke those that are not required. In addition, the introduction of the unified audit trail makes it far easier to keep set auditing policies and to review the resulting logs.
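
As a rough sketch of how privilege capture is used (the capture name is an assumption for illustration), a capture is created and enabled, left running while the application performs its normal workload, and the unused privileges are then reported on:

# Illustrative only: run a database-wide privilege capture and list unused privileges
sqlplus / as sysdba <<'SQL'
EXEC DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(name => 'db_wide_capture', type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
EXEC DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('db_wide_capture');
-- ...leave running during a normal workload, then:
EXEC DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('db_wide_capture');
EXEC DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('db_wide_capture');
SELECT * FROM dba_unused_privs WHERE capture = 'db_wide_capture';
SQL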

The final thing to touch on today is the keynote speech, in which several new announcements were made, such as the release of Oracle Autonomous Linux, reported to be the world’s first fully autonomous operating system. Oracle Autonomous Linux performs a number of system management tasks, using machine learning to learn about the system and its function and adapting the configuration to suit. The aim is to remove human interaction from the management of the servers and reduce the risk of an accidental configuration change or unexpected result causing downtime.

What was also clear from the keynote is that although Oracle’s preference is now Oracle Cloud, they recognise the need to allow their products to be run in the environment that the client desires. This ultimately means there is a lot of support for Oracle products in other environments such as other clouds, on-premises systems or even a hybrid scenario. This was further reinforced by the announcement that Oracle are aiming to bring the Autonomous Database and other Generation 2 cloud features to their customer-hosted solutions, allowing you to run the same environments and products as Oracle Cloud Infrastructure but in any environment.

Lastly, a number of technical configuration changes were gathered from today, along with some new and old features, which are starting to form part of our action points for implementation after the conference to improve our Oracle environments. One of our account managers will be in touch with Oracle customers after the conference to discuss these action points and rolling them out to you.





Posted on  - By

Roll the dice testing was an interesting new concept introduced to me in September whilst at Microsoft Ignite 2016, and ever since I’ve found myself explaining it to existing and new customers as a fantastic way to perform disaster recovery testing.

All too often when we think about DR planning and exercising/testing, we specifically choose exactly what we are testing, which doesn’t accurately reflect the real-world scenarios we might need to recover from. As an example, you might have an Exchange DAG set up within your data centre and test that DAG, but what if we lost the data centre? Do we test for that? Do we even consider that a possibility?

If there’s anything that working in a data centre environment has taught me over the years, it’s that we are consistently improving resiliency options for what we expect to happen, and we find solutions to what has happened in the past, but what do we do when the unexpected happens?

Thinking back to some unexpected outages over the past couple of years, the incident that rings truest for being unexpected is the case of an electrician who was working on a power feed he had isolated, but which had then been re-energised without his knowledge. The electrician received a life-threatening shock and the power was immediately isolated, causing all systems within the data centre to be shut down for a significant amount of time whilst the emergency services were in attendance, and even longer whilst an initial investigation into potential wrongdoing was conducted.

In another scenario, we know that there is fire suppression within data centres to protect crucial equipment, but often this is linked to a shutdown of power and cooling, which in some cases can lead to extensive outages.

So the big question is: how can we possibly provide resiliency in unexpected scenarios such as these? I believe roll the dice testing has a large role to play in determining what to do.

Roll the dice testing initially involves drawing out the different elements of your infrastructure into a 2×3 grid, or a 2×6 grid and so on, depending on how many components you think are relevant. You can even go further, having one grid for location/department and another grid for sections within that location/department. As an example, you might have your different global offices and data centres on one grid and then a second grid for the different racks or suites you operate within each data centre.

The next step is to grab dice matching the size of grid you went for (which is why we work in sixes, in case you hadn’t worked that out!) and treat whatever you roll as though that part of your company has been completely lost. Now start to work out what systems are down, the DR protocol for recovering them, the expected time for recovery and the impact to the business. Scary? Well, we’re not finished yet. Roll the dice again and you’ve now lost another section 30 minutes after the first. What’s the situation now? You wanted to look at the unexpected: would you expect to lose 2 parts of your infrastructure at the same time? My guess would be no, but it happens, and those are the events which have the biggest impact.
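
If you’d rather not keep dice in the DR folder, the same random pick is easy to script; a minimal sketch, assuming you keep your components listed one per line in a text file (the names below are made up for the example):

# Illustrative only: pick two components at random to treat as lost
printf '%s\n' "UK office" "US data centre" "DC1 rack A" "DC1 rack B" "DC2 suite 1" "DC2 suite 2" > components.txt
shuf -n 2 components.txt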

So, this is in essence a simple way to completely randomise your DR testing and get you thinking about the unexpected. The more diverse you are with your testing the better, and if you find a weakness in your DR testing or protocol then come and speak to us and see how we deliver highly resilient and highly available infrastructure from multiple locations at the same time, up to 3,500 miles apart. Yes, you could theoretically still lose both data centres at the same time; however, the element of risk is substantially different.





Posted on  - By

Last week our Technical Director attended Microsoft Ignite 2016 in Atlanta to keep up to date on the latest developments from Microsoft and the technology industry. Matt writes about his experience below and why we attend events such as these.

“Microsoft Ignite has, for the past 2 years, become a fundamental part of our internal training process and of shaping the direction of the company. You might think that a trip to the US seems like good fun, however with 26 hours of seminars to attend during the week there’s a lot of information to take in, and a beer was very welcome at the end of each day before getting an early night!

Microsoft Ignite brings together I.T. pros from all over North America and Europe to share knowledge and keep up to date on the latest developments not just from Microsoft but the technology industry in general.

A number of people that haven’t attended before tend to think of Ignite as a Microsoft brainwashing event, so I think it’s important to note that the sessions are conducted not just by Microsoft’s own employees but by MVPs and partners, which provides several different perspectives on the way the industry is changing. The event also contains a large expo floor where you can discuss problems and try out new products outside of the formal sessions to gather even more perspectives.

Microsoft Ignite 2016


The week started with 2 keynotes spanning 3 hours, one including Microsoft CEO Satya Nadella, which showcased a number of developing technologies such as Microsoft HoloLens and Cortana. Exciting technologies indeed, however probably still a few years away from real uptake.

After the keynotes a welcome drinks reception was held within the expo hall, which was the first insight into the new technologies that partners were bringing this year and the first chance to get into good detail on some upcoming projects with the MVPs.

The week then continues with 3 days of seminars on the whole fleet of Microsoft products, ranging from service-orientated technologies such as CRM, Exchange and SQL to infrastructure technologies; this year Azure Stack and Storage Spaces Direct were of particular interest for me.

Although every evening there are several vendor parties to go to, as well as the Microsoft Certified Professional party, this year it was really all too tiring for me to even think about going to a party in the evening, except for Thursday night when the official attendee celebration is held. This is an extraordinary night that Microsoft put on, and it always amazes me the lengths they go to in order to make it a night to remember. What I like about the celebration is that they incorporate local music, food and drinks, ensuring that local trades benefit from the conference being in town, which deserves a round of applause in itself.

I must admit I didn’t drink too much myself at the attendee celebration, however going into the final morning of sessions on Friday there was certainly less enthusiasm from attendees than on previous days! I’d like to say it was the thought of going home, however the final speakers made good light of it and had some ‘swag’ to keep people awake.

Microsoft Ignite 2016


With the mention of ‘swag’, this is one of the hot topics of the event. As an attendee you receive a rucksack, water bottle and cable pouch at the beginning of the week, however within about 30 minutes of my first visit to the expo hall I was offered several t-shirts, mobile power packs, beer cosies and much more. Some people seem to think the more the better, and I even heard of one person packing hand luggage only as he picks up free t-shirts during the week to wear instead. Myself, I picked up a couple of t-shirts for the ongoing DIY projects I have at home and a couple of charging cables that always come in useful, but the rest I left to the swag hunters.

Lastly, a summary of my takeaways from the event and what they mean for VooServers: our main focus at VooServers for the past few years has been on innovating new high-availability and failover services, and some of the key sessions that I attended relating to these technologies are going to help us develop our HA services even further. Over the next few weeks myself and the team will be releasing a number of articles on both old and new technologies from the event that are important to our clients and to developing the future infrastructure we need to deliver highly resilient services.

Check back soon for further details from myself and the team!”





Posted on  - By

Recently a client came to me with a custom dedicated server requirement to run a piece of software called Potree (www.potree.org) for rendering 3D point clouds on the web for users to access and view data.

This is a great piece of software, however when trying to compile it on CentOS Linux I came across the error below, the most important line being “unrecognized command line option ‘-std=c++14’”. In the interest of keeping this short and on point about how to correct the error, it simply means that the version of GCC that ships with CentOS is older than the one the software was built with and doesn’t support C++14.


Potree Converter Install


Using a blank install of CentOS, below are the exact steps that I took to rectify this and compile LASzip and PotreeConverter.

  1. yum install libmpc-devel mpfr-devel gmp-devel gcc gcc-c++ git cmake boost-devel
  2. cd /usr/local/src
  3. wget https://ftp.gnu.org/gnu/gcc/gcc-4.9.2/gcc-4.9.2.tar.gz
  4. tar -xvzf gcc-4.9.2.tar.gz
  5. rm -rf gcc-4.9.2.tar.gz
  6. cd gcc-4.9.2
  7. mkdir build
  8. ./configure --enable-languages=c,c++ --disable-multilib --prefix=/usr/local/src/gcc-4.9.2/build
  9. make
  10. make install
  11. cd /usr/local/src
  12. mkdir lastools
  13. cd lastools
  14. git clone https://github.com/m-schuetz/LAStools.git master
  15. cd master/LASzip/
  16. mkdir build
  17. cd build
  18. cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=/usr/local/src/gcc-4.9.2/build/bin/g++ -DCMAKE_C_COMPILER=/usr/local/src/gcc-4.9.2/build/bin/gcc ..


At this point you should receive the output below instead of the previous error, and you can then move on to running make.

Potree Converter Install


  1. make


Potree Converter Install


LASZip has now been built and we will move on to running similar steps to compile PotreeConverter.

  1. cd /usr/local/src
  2. mkdir PotreeConverter
  3. cd PotreeConverter
  4. git clone https://github.com/potree/PotreeConverter.git master
  5. cd master
  6. mkdir build
  7. cd build
  8. cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=/usr/local/src/gcc-4.9.2/build/bin/g++ -DCMAKE_C_COMPILER=/usr/local/src/gcc-4.9.2/build/bin/gcc -DLASZIP_INCLUDE_DIRS=/usr/local/src/lastools/master/LASzip/dll -DLASZIP_LIBRARY=/usr/local/src/lastools/master/LASzip/build/src/liblaszip.so ..
  9. make


Potree Converter Install


PotreeConverter has now been built and we can move it into one of the system binary locations so that we can use it from any directory we are working in later on.

  1. cp PotreeConverter/PotreeConverter /usr/bin/


If you try to run PotreeConverter now you will likely see the error message below, which requires a quick fix. Note that the fix replaces the system libstdc++ symlink, which other programs on the server also use, so proceed with care.

PotreeConverter: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by PotreeConverter)


  1. rm -rf /lib64/libstdc++.so.6
  2. cp /usr/local/src/gcc-4.9.2/x86_64-unknown-linux-gnu/libstdc++-v3/src/.libs/libstdc++.so.6.0.20 /lib64/
  3. ln -s /lib64/libstdc++.so.6.0.20 /lib64/libstdc++.so.6


If you now try to run PotreeConverter again you should receive the help information from the binary, similar to the below.

Potree Converter Install
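
With the binary in place, converting a point cloud is along these lines (the paths are made up for illustration, and the exact flags vary between PotreeConverter versions, so check the help output first):

# Illustrative only: convert a LAS file into a web-viewable Potree structure
PotreeConverter /data/pointclouds/survey.las -o /var/www/html/potree/survey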


These should be all the steps you need, however if you have any problems or see any errors different to mine, please comment below and I will do my best to help.





Posted on  - By

The question of whether to go for a dedicated server or to colocate your own server or hardware is one that we are often asked here at VooServers, and 90% of the time cost is the main driver for colocation. The other 10% is made up of flexibility, compliance, ownership and the rather ambiguous ‘other’ category.

With cost being the main driver it’s a good time to reference back to a recent article about cost analysis using Costbergs, as this has a bearing on what we are discussing today.

We will discuss each of these points below but with cost being the main driver we will start there with some common misconceptions.

Cost (90%)

When looking at the cost it’s easy to see why people are driven towards colocation: at face value you see the one-off purchase of an asset which is to last you 3 years and then lower monthly fees, which over the 3-year period makes the apparent TCO (Total Cost of Ownership) much lower than a dedicated server of the same specification.

When you look at what a dedicated server offers, though, and try to bring a colocated server up to the same level of service, the value that a dedicated server offers quickly starts to outweigh colocation.

The first thing is hardware SLAs. We build servers every day and stock all of the components that we use, so should a replacement component ever be required it’s already on site and can be diagnosed and swapped out within 30 minutes, 24 hours a day, 7 days a week. If you were to look at getting the same type of agreement with HP or Dell it’s generally a 4-hour window and will cost you around £600 a year, and you would still need one of your staff members to diagnose the fault and arrange the replacement with Dell/HP, which not only adds to the cost in terms of your engineer’s time but also the time that the service is down for.

Even if you are close to your colocation facility, I can’t imagine many I.T. managers would want to be diagnosing a server fault at 11PM on a Friday night. With a dedicated server this is all taken care of by our staff without any additional cost to you. If you don’t have an in-house I.T. team, or only have 1 or 2 members in I.T., then this becomes even more of a problem as business gradually turns more and more towards 24×7, and the infrastructure that powers it needs to be available all the time with any issues fixed without delay. In such a scenario we like to consider ourselves an extension of your I.T. team, assisting you in providing I.T. services to your company.

Flexibility (2%)

It’s a bit of a misconception that there is less flexibility with a dedicated server than there is with colocation. In fact, I’d probably say a dedicated server gives you more flexibility, especially with a provider such as VooServers who specialise in providing custom dedicated servers and can understand your needs and build something to your exact specifications.

With a dedicated server you can also choose one specification to start with and scale up as and when necessary, either by upgrading individual components within the same server or by replacing the whole server altogether. Although upgrading components is also possible with colocation, if you wanted to replace the entire server you would again have to pay out the entire cost of the server, and in most circumstances that happens before the initial 3-year period that the server was expected to last, and was budgeted for, expires.

Compliance (2%)

Compliance is something that has grown significantly as a concern over the past few years, with more and more people becoming aware of offsite I.T. infrastructure possibilities. Many I.T. managers see compliance as something that they need to handle entirely themselves, however VooServers have experience with a number of different regulatory regimes and standards such as PCI DSS and ISO 27001.

With our experience we can be an extension of your I.T. team when it comes to compliance, rather than with colocation where you need to undertake the entire process yourself. Particularly in smaller companies this is a huge benefit, as most companies these days want to be able to process payments online but don’t have the experience or budget to audit the process themselves.

Ownership (5%)

Ownership can be another tricky area, but it is a question that is starting to decline as more and more of the large companies become more accepting of online services.

There are 2 main areas when it comes to ownership: assets and data. On the asset side, colocation gives you full ownership of the asset, however with that it brings the caveats discussed earlier under cost and flexibility, and you need to ask yourself whether owning that asset really offers you much benefit.

Data ownership, on the other hand, is a very big issue, as the data is what holds the actual value of the company. Without the data the business is nothing, and it’s paramount to protect the ownership of it. This is what drives a number of people towards colocation, as there are no questions around ownership and it’s easy to demonstrate to other senior members of staff, since that’s what everyone has been used to for the past 15 years.

There are a number of online services which can be vague on this topic and on how they may use your data, however at VooServers our terms of service explicitly set out that the ownership of the data remains your own under all circumstances and that we will not use it in any way other than to provide you with your own service, giving you the same ownership as a colocated server would offer.

‘Other’ (1%)

The other category isn’t something we can easily cover here as it spans almost anything that we haven’t already discussed, so I invite you to give us a call on 0800 0803 200 or e-mail us at sales@vooservers.com with your question and we will get back to you with a personalised response.

Summary

I hope to have demonstrated in this article that colocation really doesn’t offer you many benefits over a dedicated server, and my own personal opinion is that going for a dedicated server gives you a lot more options and better value for money. Of course, you might argue that would always be my response as we are a dedicated server provider, however you will often find the same answer from our customers.





Posted on  - By

You might wonder why on an I.T. blog we are asking whether you are prepared for winter when it’s a question normally reserved for your boiler man or garage.

Winter brings with it the possibility of a number of different adverse weather conditions, whether that be snow, high winds or flooding, and with that the interruption to business also becomes a bigger possibility. It’s estimated that disruption from snow costs the UK economy over 500 million pounds a day, with early estimates for the recent floods in Cornwall alone put at 6 million.

Here at VooServers we are specialists in data centre based solutions that can play a key part in keeping your workforce operational during unforeseen circumstances.

The biggest and most important thing is to plan. It doesn’t have to be complicated, it doesn’t have to be costly, but it does have to exist! A plan can be as simple as who should notify the staff of the issue, who to redirect the phones to and what responsibilities people should undertake if they are not able to complete their normal line of work.

If you already work with VooServers you are probably aware that we offer Skype for Business, Microsoft Exchange and Microsoft SharePoint, all of which can enable your business to remain fully operational with your users working from home, keeping them working in the same manner as they would in the office to ensure the most productivity possible. It is important, however, to make sure that users have access to the software they will need to install, and have clear instructions on how to connect to these services once the software is installed, all of which should be part of your plan.

If you don’t have existing access to these technologies, then talk to us today for a free no obligation audit and proposal.

Without these technologies, the easiest and cheapest step you can take to enable your users to work from home is to allow them to connect remotely to their normal work PCs. In most circumstances businesses already have the capability to do this with their existing setup; it just needs to be enabled. A worst-case scenario is a replacement router, which starts from under £100. If you are purely contingency planning then this is probably the cheapest step to take, but the efficiencies are not as great as with the technologies mentioned above, as it’s not the user’s normal way of working and, when it’s required, it still needs to be set up on each person’s home computer.

VooServers are able to offer a number of solutions not just to fulfil contingency plans but to enable greater working capabilities all of the time. Call us today on 0800 0803 200 or e-mail us at sales@vooservers.com to chat to one of our engineers about the possibilities.





Posted on  - By

When designing failover solutions we often look at recovery time objectives (RTO) and recovery point objectives (RPO) to determine what the requirements for failover are.

RTO is the time that recovery from a failure should take, and for most companies we work with this is under 30 minutes. Given that an average diagnosis can take 10 minutes, this only leaves a further 20 minutes at most to fix the service and bring it back online.

RPO is the acceptable data loss during a recovery operation. So, for instance, if you are relying solely on a backup then your RPO might be 1 hour, and the backup should run every hour or less to comply with this. Note, though, that a backup/restore method often has a significant impact on the RTO.
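
As a trivial sketch of that relationship (the script name is a made-up placeholder), an RPO of 1 hour implies a backup schedule no less frequent than hourly, for example via cron:

# Illustrative only: run a backup at the top of every hour to meet a 1-hour RPO
0 * * * * /usr/local/bin/run_backup.sh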

These 2 figures often work together to determine how the service should be designed, as you can’t use a traditional point-in-time backup to achieve an RTO of 5 minutes, although it may still achieve your RPO of 15 minutes. Likewise, you might have an RPO of 30 seconds but an RTO of 1 hour, meaning that data has to be as current as possible but it’s not too important if it takes a while to recover, providing it’s there.

Here at VooServers we work with a number of different technologies to design services that meet any RTO or RPO that is defined. We have a proven track record of delivering services across multiple sites with an RTO as low as 2 minutes and an RPO as low as 10 seconds.




