Posted on March 15th, 2019 - By Nick Stears
Passwords are notorious for being difficult to remember. At the same time, they never seem to be secure enough, which is a constant concern given that they are required for most online and retail transactions, as well as for any site that holds personal data of any kind.
I was fortunate enough to attend the Microsoft Ignite Tour at London’s ExCeL Centre this February. It was readily apparent that Microsoft are intent on adopting innovative alternatives to passwords, opting for biometric data or a remote PIN code to log in to their core services.
Forgotten passwords have been a common issue over the last few years, with clients regularly calling our support team after losing track of theirs. This is usually because they have had to set their password to something super secure, with a mix of letters, numbers and special characters, which is not always easy to remember!
Having a strict, secure password is of course a good thing: compromised accounts are the number one source of data theft, and Microsoft report that a staggering 81% of compromised accounts are down to stolen or weak passwords, so it’s important that passwords are set to something secure. However, even strong passwords can be compromised, and although user awareness campaigns may help to some extent, they are not always the best route.
For the past few years Microsoft, alongside other major organisations like Google, have been pushing for the enablement of MFA (Multi-Factor Authentication), which has definitely helped; Microsoft report that MFA prevents over 99% of account compromise attacks.
Microsoft’s MFA involves two steps of verification, adhering to the proven security concept of combining something you know (like a password), something you have (a trusted device like a phone) or something you are (biometric data like a fingerprint). This method has been a definite success and is widely used worldwide; all of our staff here use MFA in one way or another. However, even with the extra layer of security MFA provides, you still have to remember the same password on top of approving the sign-in in the authenticator app or entering a texted code, and for some it just adds another layer of stress and complication.
All things combined, Microsoft are pushing out three new ways to authenticate your account without the need for a password:
Windows Hello is an excellent replacement for passwords and is ideal for personal PCs and laptops, quickly and easily authenticating an account using facial recognition.
There is currently a drawback, though, as this only really works for a single user at a single workstation; Microsoft report that they are “working hard on lighting up a series of personal credentials that are more suitable for such shared PC scenarios”. They also added that over 47 million users worldwide have used Windows Hello, something that seems to have gone down very well within the commercial sector.
Facial recognition is definitely a growing market; I use it myself on an iPhone, and in my experience it has made logging in to the phone and various secure applications effortless.
Microsoft Authenticator is primarily used at present for Multi-Factor Authentication; however, it is now also being used (currently in public preview) for password-less authentication.
To give a quick demonstration of how it works: you type just your email address into the Microsoft online sign-in page, hit sign in, and the Authenticator app then displays a number which you must match to the number shown on screen.
You will then be asked for one further piece of authentication on your mobile device, whether that be a PIN, a fingerprint or facial recognition, depending on what your device supports.
Once you have provided that additional piece of information, your web browser automatically logs you in and you are all set, without entering any password. All very simple and effortless!
The final password-less offering from Microsoft, which is gradually being rolled out worldwide, is the FIDO2 key. This is essentially a USB key with a biometric scanner, allowing you to log in without a password by using your fingerprint. It does require your device to be running at least the Windows 10 October 2018 Update. Microsoft say this tool is aimed at the ‘deskless’ user, giving the primary example of a doctor.
This seems like a great idea; however, it means you would always need to carry the key around with you, which could be an issue if it is misplaced.
It was readily apparent that Microsoft are really pushing for a password-less journey. Are you ready to begin yours? Contact our sales team today on 01622 524200 or at firstname.lastname@example.org to find out more about how we can help your business and its security.
Posted on June 11th, 2018 - By Nick Stears
Hypertext Transfer Protocol, more commonly known by its acronym HTTP, is the method by which a website delivers data and content from its servers to the browser on any particular device. HTTP transfers hypertext documents containing hyperlinks, which allow navigation through a website, along with the information needed to load audio, visual and other content. HTTP has been the gold standard since the inception of the internet.
The fundamental issue with HTTP in the current climate is transparency: privacy and security measures are sorely lacking. As an example, hackers could with little trouble access personal data or credit and debit card information by intercepting an online transaction a user has made through a website, potentially leaving a significant number of customers vulnerable. This is where HTTPS comes in. If you happen across a website using HTTPS, it is most likely that the website employs an SSL certificate; this can be confirmed via the secure padlock icon in the browser toolbar. An SSL certificate encrypts data such as passwords and card details in transit, making it very difficult for would-be hackers to exploit business or consumer data.
HTTPS has now become the gold standard that HTTP once was, especially in the current climate of increased data protection, as it helps ensure business and customer data is at a reduced risk of exploitation. Should your business not employ an SSL certificate and run on standard HTTP, you not only run the risk of data breaches of company information, you also send the message to customers that perhaps their best interests and security are not being met, effectively losing new and current customers in the process. Web browsers such as Google Chrome have already gone as far as showing a warning on form submission if the connection is not secure. This could alienate further customers who would otherwise have been interested in a business’s products or services, but become wary when the world’s most popular browser deems the website unsafe.
Here at VooServers we provide single SSL certificates, EV certificates and Wildcard certificates. Wildcard certificates offer unlimited subdomain protection under a single domain, which, depending on how many subdomains you are securing, can be significantly cheaper than buying a certificate for each individual subdomain. It is worth noting, however, that some advanced SSL features such as EV are not generally supported on wildcard certificates.
Installation is provided by VooServers technicians, who also send annual renewal reminders for peace of mind that protective measures remain in place for both business data and, perhaps more importantly, your current and future customer base. VooServers can also assist in redirecting an HTTP site to HTTPS, directing customers to the correct protocol. With increased data protection regulations and GDPR coming into full force, now is the time to provide SSL certification for the business websites you present to the public.
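As an illustration, redirecting HTTP to HTTPS can be as simple as one extra server block in nginx. This is only a sketch; the domain and certificate paths below are placeholders to be replaced with your own:

```nginx
# Redirect all plain-HTTP requests to their HTTPS equivalent
server {
    listen 80;
    server_name example.com www.example.com;   # placeholder domain
    return 301 https://$host$request_uri;      # permanent redirect
}

# Serve the site itself over HTTPS only
server {
    listen 443 ssl;
    server_name example.com www.example.com;
    ssl_certificate     /etc/ssl/certs/example.com.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.com.key;
    # ... the rest of the site configuration ...
}
```

The 301 status tells browsers and search engines that the move to HTTPS is permanent, so they update their records rather than keep trying the insecure address.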
For more information on SSL certificates please contact our sales team at email@example.com or by telephone on 01622 524200, and they will be happy to advise on our current pricing and certification options.
Posted on April 13th, 2017 - By Nick Stears
The hyper-converged market is rapidly growing in today’s climate; the market alone is worth upwards of one billion pounds and is thought to be growing at around 150% a year. For those unfamiliar with converged and hyper-converged infrastructure, I will try to explain it in its simplest form.
With a non-converged infrastructure you have, for example, a virtualisation server (running Hyper-V, Xen, KVM etc.), which connects to some form of data storage via direct attached storage (DAS), a storage area network (SAN) or a network attached storage (NAS) device. The virtual machines’ disks are hosted completely separately from the virtualisation server. The storage device will have some form of RAID configured and optimised for performance and redundancy, but the key point is that they are completely separate; you would generally connect multiple virtualisation servers to the storage array.
With a hyper-converged infrastructure, everything is rolled into one: disks are stored on the same server, with a storage controller running as a service on each node (you need a minimum of two nodes), which means you can scale your cluster while maintaining the redundancy and resiliency that a dedicated storage device gives you. Storage is then abstracted as a separate layer, which is used to create virtual SANs within the same hardware, as demonstrated in the picture below:
With Windows Server 2016, hyper-convergence is made possible by a feature called Storage Spaces Direct. This technology allows every node within the cluster to see each disk as if it were its own local disk. Storage Spaces Direct also makes sure that every disk is resilient, so there are at least two copies of the data split across multiple nodes; if a node or disk fails, the data from that disk is still intact elsewhere. Storage Spaces Direct acts as a storage controller, replacing the need for a physical hardware RAID card, although you can still use both for performance gains.
The number of nodes in your cluster determines how resilient and how efficient your infrastructure can be. In a simple 2-node cluster, the lowest entry point for this infrastructure, you are limited to a two-way mirror, which allows for the complete failure of one node. To determine which machine stays live in the event of a failure you do require an external witness server; this can be anything within your network outside of your cluster, or even cloud based. The witness casts the deciding vote when a node fails: if node 1 cannot communicate with node 2, and vice versa, the witness has the final say as to which node should be active. It is used primarily in even-node clusters to ensure there is always a majority vote in the event of a hardware or network failure on one or more nodes. For a 3-node cluster Microsoft recommend a three-way mirror, which tolerates the failure of one node and a failed disk on a second node simultaneously, an extra layer of redundancy compared to the 2-node cluster; a witness server is not required here, as with three nodes there will always be a deciding vote. A 4-node cluster allows for dual parity, which adds another layer of redundancy and is Microsoft’s recommended setup for optimal performance. There are also figures to suggest that a 4-node cluster reaches 50% storage efficiency, an 8-node cluster 66%, and a 16-node cluster up to 80% with a full SSD configuration.
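The witness logic described above boils down to simple majority voting. A minimal Python sketch (an illustration only, not Microsoft’s implementation) shows why a 2-node cluster needs the extra vote:

```python
def has_quorum(votes_held, total_votes):
    """A partition stays active only if it holds a strict majority of the votes."""
    return votes_held > total_votes // 2

# 2-node cluster, no witness: a split leaves each node with 1 of 2 votes,
# so neither side can claim a majority and the cluster cannot decide.
print(has_quorum(1, 2))  # False

# 2-node cluster plus a witness: the node that can still reach the witness
# holds 2 of 3 votes and stays active; the isolated node (1 of 3) stands down.
print(has_quorum(2, 3))  # True
print(has_quorum(1, 3))  # False
```

A 3-node cluster already has an odd vote count, which is why a surviving pair of nodes always holds a majority and no witness is needed there.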
There are many other features under the hood that deserve an honourable mention. One example is that you can set priorities on virtual machines. If you need to take one node down and you do not have the memory to fail all VMs over to another node, you can set different priority levels. When a node is put into maintenance, it will always prioritise moving the VM with the highest priority to the next available node; if there is insufficient memory to move all of the virtual machines, those with a lower priority will pause until the original node is brought back online.
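That priority behaviour can be sketched in a few lines of Python. This is a simplified illustration only; the VM names, priority scale and memory figures are all made up:

```python
def plan_failover(vms, free_memory_gb):
    """Decide which VMs move to the surviving node and which pause.

    vms: list of (name, priority, memory_gb); higher priority moves first.
    Returns (moved, paused) lists of VM names.
    """
    moved, paused = [], []
    # Place the highest-priority VMs first
    for name, priority, memory in sorted(vms, key=lambda v: v[1], reverse=True):
        if memory <= free_memory_gb:
            free_memory_gb -= memory  # reserve memory on the target node
            moved.append(name)
        else:
            paused.append(name)  # not enough memory left: VM pauses
    return moved, paused

# Target node has 20 GB free; the 16 GB medium-priority VM no longer fits
# once the high-priority VM is placed, so it pauses until its node returns.
moved, paused = plan_failover(
    [("web01", 3, 8), ("sql01", 2, 16), ("test01", 1, 8)], 20
)
print(moved, paused)  # ['web01', 'test01'] ['sql01']
```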
The health service, which monitors the state of the drives in your nodes, has also been improved over 2012 R2. If a disk fails for any reason the end user is notified, the disk is highlighted within the node, and it can be replaced and rebuilt without any intervention other than physically swapping the disk.
This was just a small glimpse into hyper-convergence and we look forward to rolling this out in the coming months to many of you! If you are interested in this technology, feel free to contact us by email at firstname.lastname@example.org or call us on 0800 0803 200 for more information or to discuss your requirements.
Posted on July 22nd, 2015 - By Nick Stears
In our industry, uptime is absolutely everything: we need to ensure services are online 100% of the time regardless of issues at the network or hardware level. To meet such high standards, where they are required, we use ARCserve Hyper-V High Availability and Assured Recovery to ensure we are never vulnerable to downtime. VooServers prides itself on having multiple points of presence, which gives us the flexibility to offer customers additional failover to another site in the event of a failure at the power or network level. The concept is relatively simple: both servers must be running the same operating system, have the ARCserve engine driver running and have Hyper-V installed:
Posted on May 12th, 2015 - By Nick Stears
Backing up laptops and office computers is imperative for most companies, whether it be your accountant’s financial documents, your designer’s website content or your Managing Director’s email data: it’s crucial that your data is safe and recoverable. It is equally important, however, for companies to do their bit for the environment and ensure workstations are switched off when not in use, cutting down on power costs and energy. The main problem is that when laptop lids get shut and PCs are put on standby, your data rarely gets a chance to back itself up properly. With Backup2Go none of this matters: your workstations will back up wherever they are, at whatever time, provided they have internet access and power, and all without the need to begin a backup from scratch. With very little administration required, Backup2Go is software you can depend on.
One of the reasons we started to look at Backup2Go is that it seamlessly supports Windows, Mac and Linux based operating systems, a must for offices that give their employees a choice of OS. Once the agent is downloaded and installed, a quick five-minute setup to configure which server to back up to and you are ready to go. With Backup2Go the administration is a doddle: once you have your P5 backup server set up and a template in place, you can begin assigning agents to it. The configurable options on the templates are extremely useful:
When the backups are running, you can conveniently administer all of your agents in the P5 overview control panel, where a traffic light system shows the current status of every running agent. This makes it easy to see how long it has been since the last successful backup and whether there are any problems.
Restoring files is equally simple and can be done on the user’s workstation or from the P5 administration panel. This takes away the hassle of having to contact your backup admin, as users can simply log in to the agent themselves and restore their files from any given point in time (displayed on the right of the screenshot below).
We have been very impressed with the flexibility Backup2Go offers; it allows us to manage backups of clients’ laptops and workstations with the freedom of knowing that their data is being securely backed up regardless of where they are.
Posted on March 13th, 2015 - By Nick Stears
For any IT company in ownership of IP space using Cisco hardware, it is usually good practice to assign specific subnets to clients in their own VLAN; with IPv4 space becoming sparse this will usually be something as small as a /30 or even a /32. Although this makes administration a lot easier, there are still companies that route an entire /24 block in one VLAN and hand out a few IPs from this pool, trusting that clients will only use the IPs they have been assigned. This is all well and good, but you are relying purely on trust, and from time to time clients may attempt to take advantage of the situation and use additional IPs that do not belong to them. There is also the scenario where you have simply assigned somebody the wrong IP address by mistake. Fortunately we can quite easily track down the culprit port on the Cisco with a few simple commands.
The first step is to ping the IP in question from another IP within that subnet; this is important as it makes the address show up in the ARP table and reveals the MAC address we require. Once you have sent the ping, type arp -a in the Windows command prompt, or simply arp on Linux. ARP stands for Address Resolution Protocol and is used to resolve network layer addresses into link layer addresses; here we will be able to see the IP we have just pinged and the MAC address associated with it.
The next thing we need to do is log in to our Cisco devices. Here we have logged on to our router and broken the 12-character MAC address into three sections of four characters separated by dots, the format Cisco understands:
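Reformatting the MAC address by hand is fiddly; a small Python helper (using a made-up MAC address for illustration) shows the conversion:

```python
def to_cisco_mac(mac):
    """Convert aa:bb:cc:dd:ee:ff (or hyphenated) into Cisco's dotted format."""
    digits = mac.replace(":", "").replace("-", "").lower()
    # Regroup the 12 hex digits into three blocks of four
    return ".".join(digits[i:i + 4] for i in range(0, len(digits), 4))

print(to_cisco_mac("00:1A:2B:3C:4D:5E"))  # 001a.2b3c.4d5e
```

On most Cisco IOS devices you can then look the address up with show mac address-table address 001a.2b3c.4d5e to find the offending port.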
Posted on February 18th, 2015 - By Nick Stears
Here at VooServers we use OwnCloud as a simple way of uploading and storing files on a server, making it easy to distribute direct download links to our clients, but as we began to explore OwnCloud in more detail we found there’s a lot more under the hood than initially meets the eye.
Posted on January 23rd, 2015 - By Nick Stears
Keeping your data safe is imperative for most organisations. People say ‘you can never have enough backups’, and that may be a little cliché, but it really is a good ethos to adopt if you are concerned about data loss. Here at VooServers we use R1soft’s CDP as it gives us the flexibility and customisation needed to ensure we have everything backed up and stored in one centralised location.
There are three fundamental parts to CDP: the server, the disk safe and the policy. Adding the server is a relatively painless task; it’s simply a matter of installing a small agent on the destination server (compatible with Linux and Windows, amongst others) and adding a key to allow the two servers to authenticate, and you’re all set.
Creating the disk safe is just as simple and gives you the flexibility of adding either a single disk, multiple disks, or automatically adding all disks on the server. This is obviously quite handy if you are regularly adding new disks to a system, as it saves a lot of administrative time.
The final and most crucial part of the procedure is adding and configuring the policies. This is where R1soft shines: simply select the server, select the disk safe and then customise the policy exactly how you want it. For instance, you can set the backup frequency to daily, hourly or even every minute, and set the number of recovery points to however many days’ worth of backups you require. To give an example, if you backed up a server every hour and wanted to hold the data for a month, you would set 744 recovery points (24 backups a day × 31 days in a month). This allows you to restore from 744 different points in time over the space of a month, giving you ultimate flexibility; of course, you can back up or retain more frequently if you prefer.
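The recovery-point arithmetic is simple enough to capture in a trivial helper (for illustration only):

```python
def recovery_points(backups_per_day, retention_days):
    """Number of recovery points needed to cover the retention window."""
    return backups_per_day * retention_days

# Hourly backups retained for a 31-day month:
print(recovery_points(24, 31))  # 744

# Backups every 15 minutes kept for a week:
print(recovery_points(96, 7))  # 672
```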
Beyond scheduling and retention periods there are further customisations that can be made: for instance, you can exclude certain files or directories from backups, you can back up individual databases via the MySQL/SQL add-ons, and there are also add-ons to back up Exchange instances or cPanel accounts.
Let’s take a typical case study: you have a multi-user cPanel server with a dozen or so SQL databases. A user makes an error and corrupts their SQL database, taking their website offline and potentially losing revenue. With R1soft CDP we can simply select a restore point (usually a few moments before the error was made), find the correct database, click restore, and the database will be back to a working state within minutes! With this type of backup power at our disposal we can feel confident that any mistake can be put right with just a couple of clicks!