
Dirty Cow: The latest Linux Kernel Privilege-Escalation Vulnerability




‘Dirty Cow’ may sound humorous and far removed from the world of IT systems security, but the truth couldn’t be more different. Taking its name from a play on the acronym for the Linux kernel mechanism ‘Copy On Write’ (COW), Dirty Cow is the latest in a seemingly never-ending timeline of Linux kernel exploits.

The theory is relatively simple: a malicious application sets up a race condition that lets it modify a root-owned file (executable or otherwise) when that file is mapped into the private memory space of a non-privileged user. These changes are then committed to storage by the kernel. Not ideal. TheRegister.co.uk explained the process perfectly:

The exploit works by racing Linux’s CoW mechanism. First, you have to open a root-owned executable as read-only and mmap() it to memory as a private mapping. The executable is now mapped into your process space. The executable has to be readable by the process’s user to do this.

Meanwhile, you repeatedly call madvise() on that mapping with MADV_DONTNEED set, which tells the kernel you don’t actually intend to use the memory.

Then in another thread within the same process, you open /proc/self/mem with read-write access. This is a special file that allows a process to access its own virtual memory as if it was a file. Using normal seek and write operations, you then repeatedly overwrite part of your own memory that’s mapped to the root-owned executable. The overwrite shouldn’t affect the executable on disk.

So now, your process has the read-only binary mapped in as a private read-only object, one thread is spamming madvise() on that read-only object, and another thread is writing to that read-only object. Writing to that memory object should trigger a CoW: the touched page of the executable will be altered only in the process’s memory – not the actual underlying root-owned file that’s mapped in.

However, due to the aforementioned bug, the kernel performs the CoW operation but then allows the process to write to the read-only mapped executable anyway. These changes are committed to disk by the kernel, which is bad news.
Whilst this exploit technically isn’t new (the bug has been present in kernel versions dating back to 2007), its priority and significance have rocketed due to public acknowledgement in major bug trackers. Fully working code releases that make (malicious) use of this exploit are now circulating in infosec communities, ripe for misuse. Thankfully, most major distributions have already released patches to resolve the bug.

RedHat – https://access.redhat.com/security/cve/cve-2016-5195
Debian – https://security-tracker.debian.org/tracker/CVE-2016-5195
Ubuntu – http://people.canonical.com/~ubuntu-security/cve/2016/CVE-2016-5195.html
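As a quick first check, you can compare your running kernel against the first patched mainline releases (4.8.3, 4.7.9 and 4.4.26, per the CVE-2016-5195 trackers). This is only a rough sketch: distributions backport the fix to lower version numbers, and the helper name below is hypothetical, so always defer to your vendor's advisory.

```shell
# Hypothetical helper: reports "vulnerable" if the given kernel is older than
# the first patched mainline release in its series (4.8.3 / 4.7.9 / 4.4.26).
# Distro kernels backport fixes, so treat this as a rough first check only.
check_dirty_cow() {
  kernel="$1"   # e.g. "$(uname -r)" with any local suffix stripped
  for fixed in 4.8.3 4.7.9 4.4.26; do
    series="${fixed%.*}"                      # e.g. 4.8
    case "$kernel" in
      "$series".*)
        # sort -V orders version strings; if the fixed release sorts last,
        # the running kernel is older than the fix
        if [ "$(printf '%s\n' "$kernel" "$fixed" | sort -V | tail -n1)" = "$fixed" ] \
           && [ "$kernel" != "$fixed" ]; then
          echo vulnerable
        else
          echo patched
        fi
        return
        ;;
    esac
  done
  echo "unknown series - check your distribution's advisory"
}

check_dirty_cow "4.8.2"    # older than the patched 4.8.3
check_dirty_cow "4.4.26"   # exactly the patched release
```

Anything outside those three series (most distro kernels) falls through to the “check your advisory” case, which is deliberate: the distro trackers linked above are authoritative.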

Linux Kernel creator and (still) key developer, Linus Torvalds, summarised the fix in his own release last week:

This is an ancient bug that was actually attempted to be fixed once (badly) by me eleven years ago in commit 4ceb5db9757a (“Fix get_user_pages() race for write access”) but that was then undone due to problems on s390 by commit f33ea7f404e5 (“fix get_user_pages bug”). In the meantime, the s390 situation has long been fixed, and we can now fix it by checking the pte_dirty() bit properly (and do it better).
Read the full release here

Posted on  - By

In this guide, I show you how to install Postfix and PostFWD (Postfix Firewall Daemon), configure rate limiting for a specific recipient domain, and integrate PostFWD into Postfix.


Prerequisites:

PostFWD v1.0+ (we will install v1.35)
Postfix v2.5+ (we will install v2.6.6)
CentOS 6.x (we are working in 6.8 x64)
You may also need tools such as nc (netcat), telnet, and various Perl modules (detailed later)

Install Postfix

Postfix is a strong, reliable and extremely common SMTP server. CentOS 6 ships with Postfix preinstalled, but to use PostFWD you need to ensure you are running a version higher than 2.5.

Find out using ‘rpm’:

[root@server]# rpm -qa | grep postfix

Or use ‘yum’:
[root@server]# yum info postfix

Once installed, if for some reason you were using sendmail as your default MTA (Mail Transfer Agent), you’ll need to change this to postfix using ‘alternatives’:
[root@server]# alternatives --set mta /usr/sbin/postfix

Check you are running a valid version of Postfix:
[root@server]# postconf mail_version
mail_version = 2.6.6

Ensure Postfix starts on a system reboot:
[root@server]# chkconfig postfix on

Configure Postfix

Configuring Postfix is a rather open-ended task and will depend on what you are using the SMTP server for. If you have come this far, you likely already have a Postfix configuration, or you are simply using it to relay mail for a specific application. Either way, you should set some of the most basic Postfix configuration options in ‘/etc/postfix/main.cf’:

myhostname = Set to the mail server’s FQDN/hostname
mydomain = The domain name of the mail server
myorigin = Usually the same as $mydomain
inet_interfaces = Set to all to listen on all network interfaces
mydestination = $myhostname, localhost, $mydomain
mynetworks =, /32
relay_domains = $mydestination
home_mailbox = Maildir/
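As a concrete illustration, here is a minimal main.cf fragment with placeholder values (mail.example.com and the network below are hypothetical; substitute your own):

```
# /etc/postfix/main.cf -- placeholder values for illustration only
myhostname      = mail.example.com
mydomain        = example.com
myorigin        = $mydomain
inet_interfaces = all
mydestination   = $myhostname, localhost, $mydomain
mynetworks      =
relay_domains   = $mydestination
home_mailbox    = Maildir/
```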

If you are relaying from a specific location/server, you will of course need to adjust how you do this. This How-To is not a Postfix/SMTP Server configuration guide. It is a PostFWD integration guide to Postfix.

Install PostFWD

PostFWD, or Postfix Firewall Daemon, is a daemonised process that acts as a check policy service for Postfix. It has a customisable rule-set that it applies dynamically to any and all mail that Postfix sees; we’ll touch more on that later. It’s very powerful, and offers several mail-handling features that would otherwise not be possible in Postfix alone (or any other MTA, for that matter).

We need version 1.0 or higher, so grab the tarball from postfwd.org, and run through some initial setup steps:
[root@server]# cd /usr/local
[root@server]# wget http://postfwd.org/postfwd-1.35.tar.gz 
[root@server]# tar -xvzf postfwd-1.35.tar.gz
[root@server]# mv postfwd-1.35 postfwd
[root@server]# cp /usr/local/postfwd/etc/postfwd.cf /etc/postfix/
[root@server]# cp /usr/local/postfwd/bin/postfwd-script.sh /etc/init.d/postfwd
[root@server]# chkconfig postfwd on
[root@server]# service postfwd start

Whoa there, it’s not quite that easy. As the PostFWD documentation states quite adamantly, this will not work (or even start) without a couple of Perl modules installed.

[root@server]# yum -y install perl perl-CPAN perl-prefork gcc

You’ll need to do the rest in ‘cpan’:
[root@server]# cpan
cpan[1]> install Net::Server::Daemonize
cpan[1]> install Net::Server::Multiplex
cpan[1]> install Net::DNS

Once all of the Perl modules (and Perl itself) are installed, it’d be a great idea to issue a yum update and reboot the system. Now you are ready to continue and configure PostFWD.

In terms of configuration, the world is your oyster with PostFWD. As the name suggests, it is essentially a firewall for your mail server: it can allow, drop, defer, reject silently, rate limit, and match rules on message character counts, body sizes, send frequency, or a combination of any number of these factors. Want to stop users x, y and z from sending more than 200MB’s worth of attachments in a 12-hour period? No problem.

In this specific example, we want to rate limit (rather aggressively) all outbound mail to a specific domain: no more than 10 emails every 30 minutes. Mails sent after this limit is reached will be rejected. Mails within that limit can be sent at any frequency. (This differs from the stock rate limiting within Postfix itself, where a 10-emails-in-30-minutes limit would delay ALL mail, sending one mail every 3 minutes until everything had eventually gone out. In this scenario, that is not helpful.)
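For contrast, the stock Postfix pacing described above would look something like the following sketch, using Postfix’s real default_destination_rate_delay parameter. It delays and queues rather than rejects, which is exactly the behaviour we want to avoid here:

```
# main.cf -- Postfix's built-in pacing, shown for contrast only:
# at most one delivery per destination every 3 minutes. Excess mail
# is queued and delayed, never rejected.
default_destination_rate_delay = 3m
```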

Check everything’s working:

At this point it’s a good sanity check to confirm everything is up and listening on the ports you expect. Use netstat to look at the two ports in question; you should see something strikingly similar to the below.

[root@server]# netstat -anpl | grep -E ':10040|:25'
tcp        0      0   *                   LISTEN      10181/postfwd.pid
tcp        0      0        *                   LISTEN      10278/master
tcp        0      0 :::25                       :::*                        LISTEN      10278/master

If you don’t see the above, it means one or both of the services are either not running or unable to bind to their respective ports. Check the services are running, check that things like SELinux aren’t stopping applications from binding to ports, and check /var/log/messages or your other syslog locations for evidence of problems.

Configuring PostFWD:

Earlier on, you copied postfwd.cf into /etc/postfix. It’s time to configure that file with your rules. We are going to define just one, to rate limit as described above, but you will likely want a lot more, plus a catch-all style rule to match “everything else”. Remember that our example is built on a custom internal mail server with one specific task to do.

In this example, the only parts of the pre-supplied postfwd.cf we keep are the following:
[root@server]# cat /etc/postfix/postfwd.cf
## Definitions
# Whitelists

## Ruleset
# Rate Limit TO: domain.com - 10 messages in 1800 seconds (30mins)
id=ratelimit001; recipient_domain==domain.com;
        action=rate(recipient_domain/10/1800/421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.)

Note the syntax of our rate limiting rule; it’s fairly straightforward. Define the recipient domain, give it the ‘rate’ action, then tell it how many messages to allow, in what time frame, and what action is triggered when the limit is met. We chose to reply with a 421 4.7.1 SMTP response, rejecting the inbound RCPT command from the sending server.

Once you have your rule in place, check that PostFWD parses it correctly:
[root@server]# /usr/local/postfwd/sbin/postfwd -f /etc/postfix/postfwd.cf -C
Rule   0: id->"ratelimit001"; action->"rate(recipient_domain/10/1800/421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.)"; recipient_domain->"==;domain.com"


Trigger the rate limit manually to see how PostFWD replies to it:
PostFWD comes with a “sample request” file that you can pipe into PostFWD to see how it reacts to differing rules. Modify it (see the path below) to suit your rate limit criteria.

Now throw that sample request at PostFWD using netcat (you may need to install this with ‘yum install nc’).
[root@server]# nc 10040 </usr/local/postfwd/tools/request.sample

The action “DUNNO”, although worrying at first, is actually the desired outcome. PostFWD doesn’t know what to do with the message, so it states “DUNNO” back to Postfix and lets the message pass. Keep firing that command until you hit your rate limit.

[root@server]# nc 10040 </usr/local/postfwd/tools/request.sample
[root@server]# nc 10040 </usr/local/postfwd/tools/request.sample
[root@server]# nc 10040 </usr/local/postfwd/tools/request.sample
action=421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.

BINGO! We hit the rate limit (I’ve excluded pointless command repetition from this guide). You can see that as soon as the rate limit is hit, PostFWD applies our own custom action that we set earlier. 421 4.7.1, message rejected. Now we just need to make that happen automatically, and with Postfix.

Integration with Postfix

The integration of PostFWD into Postfix is relatively simple. For this example, we are going to add PostFWD as a check_policy_service for Postfix to look up against. As we are specifically filtering on the recipient domain, I am going to add this to the “smtpd_recipient_restrictions” section of Postfix. This section may or may not already exist in your Postfix’s main.cf.

Open /etc/postfix/main.cf and add or amend the following:
smtpd_recipient_restrictions =
       check_policy_service inet:
       reject_unauth_destination

The key thing to note here is that check_policy_service sits ABOVE items such as permit_mynetworks. For us, localhost is a trusted network (see the config earlier on), and the mail we wish to rate limit also originates from localhost. If permit_mynetworks came first, the messages would always be passed and sent: Postfix would never consult PostFWD via the check_policy_service, as it stops processing after a successful OK reply.
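To make the ordering concrete, here is a minimal sketch of the restriction list, assuming PostFWD is listening on its default of port 10040 (adjust the address to match your postfwd daemon):

```
# main.cf -- illustrative ordering only; substitute your own policy
# service address. check_policy_service must come before permit_mynetworks,
# otherwise trusted-network mail is accepted before PostFWD is consulted.
smtpd_recipient_restrictions =
        check_policy_service inet:,
        permit_mynetworks,
        reject_unauth_destination
```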

And that’s it. Restart PostFWD, then restart Postfix (PostFWD should always be up before Postfix), and you’re good to go. Rate limit events are logged to /var/log/maillog, along with all other mail operations, successful or not. You’ll want to tail this log for a while to check nothing’s going wrong.


A nice and controlled way of testing with actual mail is to telnet into Postfix from the system itself.
Connected to
Escape character is '^]'.
220 mailtest1.vooservers.com ESMTP Postfix
HELO mail.domain.com
250 monitoringtest.vooservers.com
MAIL FROM: test@domain.com
250 2.1.0 Ok
RCPT TO: test@domain.com
250 2.1.5 Ok
354 End data with <CR><LF>.<CR><LF>
message goes here
250 2.0.0 Ok: queued as 5BECA21C21
221 2.0.0 Bye
Connection closed by foreign host.

This connects to the SMTP server (Postfix), HELOs as a mail server, defines a FROM: address, defines a TO: address, inputs some message body data, and then quits after the message has been queued in Postfix. The SMTP commands (HELO, MAIL FROM, RCPT TO, DATA and QUIT) are the text you type in; the numbered lines are the server’s responses.

You can repeat this until you hit your rate limit. Tail the maillog in another screen whilst you do this: you’ll see Postfix happily relay all the mail up until you hit your defined rate limit, at which point PostFWD will step in and reply with the 421 message to your telnet session. You’ll never get as far as inputting any message body data. Perfect.


So to recap, we:
  • Installed Postfix and set it as the system’s default MTA
  • Configured the basics of Postfix just to get it functioning as a minimal MTA
  • Installed PostFWD
  • Configured and tested rate limiting rules in PostFWD
  • Integrated PostFWD with the recipient check stage of Postfix

The possibilities with PostFWD are extremely numerous. I’d recommend anyone embarking on this to check out the full documentation of both Postfix and PostFWD; it proved invaluable to me at times during the configuration and testing of this (and multiple other) PostFWD instances.



If you have one or many MySQL replication slaves, you may need a handy way to monitor each slave’s status within your existing Nagios monitoring platform. This handy NRPE-based bash script will help you out…

#!/bin/bash
# SQL Binary Replication Failure Detection      #
# Dave Byrne @ VooServers Ltd                   #

# Is the Slave IO thread running?
slaveio=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Slave_IO_Running | awk '{ print $2 }'`
# Is the Slave SQL thread running?
slavesql=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Slave_SQL_Running | awk '{ print $2 }'`

# Pull the last SQL error, just in case
lasterror=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Last_Error | awk -F : '{ print $2 }'`

# Work out whether replication has failed
if [ "$slavesql" = "No" ] || [ "$slaveio" = "No" ]; then
  # It's failed, go CRITICAL
  echo "Slave IO Running? ... "$slaveio
  echo "Slave SQL Running? ... "$slavesql
  echo "Last SQL Error: "$lasterror
  echo "CRITICAL - MySQL Replication Failure!"
  exit 2
else
  # It's good, go OK
  echo "OK - MySQL Replication Running"
  echo $slavesql
  exit 0
fi


  • Enter your MySQL root user’s password where applicable.
  • If either the Slave IO or the Slave SQL thread stops running, the check will return CRITICAL in Nagios.
  • Does not require sudo; run straight from nrpe.cfg.
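To wire it into NRPE, a one-line command definition is enough (the command name and script path below are hypothetical; adjust to wherever you saved the script):

```
# /etc/nagios/nrpe.cfg -- hypothetical command name and path
command[check_mysql_replication]=/usr/local/nagios/libexec/check_mysql_replication.sh
```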


To make use of the JSONB features implemented in 9.4, you may need to upgrade your existing PgSQL 9.3 cluster to 9.4+. I cover the basics of how to perform an in-place upgrade.

  • 1. Add the PostgreSQL repo to apt:

    echo "deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main" > /etc/apt/sources.list.d/pgdg.list

  • 2. Install the repo’s key:

    wget -q -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

  • 3. Update apt sources and install postgresql-9.4:

    apt-get update && apt-get install postgresql-9.4 && pg_lsclusters

  • 4. You will now have two pgsql clusters, your existing 9.3 one and the new default 9.4 one. We don’t need the 9.4 one, so we can drop it:

    pg_dropcluster --stop 9.4 main && pg_lsclusters

  • 5. Use pg_upgradecluster to perform an in-place upgrade of your 9.3 cluster:

    pg_upgradecluster 9.3 main && pg_lsclusters

  • 6. You will be left with a single, upgraded 9.4 cluster.


Utilising a master/slave (hot-standby) setup to provide a resilience layer at database level can be easy. The following assumes you have two PgSQL hosts, both running Ubuntu 14.04 LTS and PostgreSQL 9.4 (9.4.5).

  • 1. On the master, edit the following in postgresql.conf:

    listen_addresses = '*'
    wal_level = hot_standby
    max_wal_senders = 3

    listen_addresses can also be scoped down to single or multiple server bound IP addresses, for added security/best practice

    wal_level defines what type of data, and how much of it is written to/stored in the Write Ahead Log. Setting to hot_standby tells PgSQL to write all the data that would have been written with “archive” mode, plus the data needed to reconstruct the status of running transactions.

    max_wal_senders defines the maximum number of processes used to send replication data to the slave. This can be fine-tuned for your DB load and network capacity.

  • 2. On the master, edit the following in pg_hba.conf:

    host	replication		all		trust

    This entry allows the slave to communicate back to the master, but only for replication based tasks.

  • 3. On the slave, edit the following in postgresql.conf:

    hot_standby = on

  • 4. On the slave, create a new configuration file named “recovery.conf” and add the following:

    standby_mode = 'on'
    primary_conninfo = 'host='

  • 5. We now need to sync the DB data from the master to the slave, so they can begin from the same point. Your mileage may vary with this, but an rsync command that would work in this scenario is the following. Note the excludes; these are important, don’t sync those:

    rsync -av -e "ssh -p 22" --exclude pg_xlog --exclude postgresql.conf /var/lib/postgresql/9.4/main/* root@

  • 6. Once the sync has completed, start the slave DB; once it’s up, start the master DB. Replication will now be in effect.


When it comes to dedicated servers, choosing an Operating System to suit your needs is crucial. Here at VooServers we offer a variety of custom setups, but by far the most common requests at setup time are for the “Famous Five”. That is, Windows Server 2008 (R2), Windows Server 2012 (R2), CentOS (6.x/7.x), Debian and Ubuntu. This quick rundown will be just the resource you need if you’re on the fence about one or the other.


Of the five OSes mentioned, three are Linux based (or at least *nix based). Linux installs are by far the most popular for server deployments, and it’s easy to see why: low resource overheads, unparalleled stability, and vastly reduced licensing costs (often NONE). For the sake of these overviews, we’ll be looking at the non-GUI, server-core installations.


CentOS

The “go-to” Linux OS for many. Praised for its simplicity, this distribution is a popular choice because it is built around, and entirely based on, RHEL (Red Hat Enterprise Linux); it is almost 100% binary compatible with the RHEL cores. That fact alone opens up a lot of flexibility with packages and software installs, without the need for a costly RHN (Red Hat Network) update/support licence.

Stability/Server Features: 3 out of 5
Ease of Use: 3 out of 5


Debian

Another very popular OS choice. Debian embodies the epitome of server stability and has been a prominent server OS for nearly 20 years. This unparalleled stability is traded off against usability, and Debian is often criticised for being slightly too cumbersome. It’s often compared negatively to RHEL, but typically by users who are not fully familiar with Debian’s way of doing things. One other point of note: as of the Debian Squeeze release around 2011, all software packages bundled and installed with the OS are free software; prior to this, certain non-free components were included.

Stability/Server Features: 4 out of 5
Ease of Use: 2.5 out of 5


Ubuntu

Ubuntu is the modern offspring of a collaboration between the Debian project and a for-profit organisation named Canonical. As a server OS it is reliable, but the extra packages bundled to aid user experience can become the undoing of this stability. Certain aspects of the OS, such as the installer, the way it implements ‘sudo’, and its package manager, make Ubuntu remarkably easy to use, at least compared to its Debian parent. Users of Ubuntu often compliment the level of support given by the technical communities; being such an up-and-coming OS, the interest and activity level is high.

Stability/Server Features: 3 out of 5
Ease of Use: 4 out of 5


The remaining two operating systems are Windows based. In many applications there’s simply no alternative to a globally recognisable and usable GUI, product support at the touch of a button, and the most widely developed-for software ecosystem in the world. Of course, the trade-off here is cost: licensing is a serious consideration when planning out your deployment. As much as you’d love the ease of an MS GUI, can your endeavour justify the rather large cost of Windows licensing?

Windows Server 2008 R2

The “go-to” choice of many. A core of the industry for many years, 2008 R2’s support base has made it hard for Microsoft to shift users onto the 2012 range of operating systems. Built on a Windows 7 kernel and core, its no-nonsense GUI and rock-solid stability are a force to be reckoned with in the server world. The only problem is that, these days, there are some technical limitations you should consider: 2008 R2 caps physical memory at 1TB, and if you’re using it as a virtualisation host, the VHD file format for virtual disks is capped at 2TB. If operating in a cluster, you can only have 16 2008 R2 nodes. If you’re planning a large-scale deployment, or virtualised applications that will use a lot of disk space, these should be taken into account and traded off against 2008’s massive support base, largely bug-free nature and no-frills “just works” GUI.

Stability/Server Features: 3 out of 5
Ease of Use: 4 out of 5

Windows Server 2012 R2

2012 R2 is built on a Windows 8 core (or rather an 8.1 core). Released in late 2012, it addresses many of the limitations imposed by 2008 R2: physical memory, for example, is now capped at 4TB. Hyper-V now uses the VHDX file format, increasing the virtual disk limit to a whopping 64TB. And for the clustered-computing guys out there, you can have up to 64 2012 R2 nodes running a maximum of 8,000 VMs! The downside, in our opinion, is that 2012 R2 has unfortunately ported across most of the 8.1 GUI: the Metro interface, app screen and start button. In a server environment, where precision is key and fluidity of tasks dictates your daily workflow, I can see no reason to have a full-featured Metro interface on a server. Even areas such as Task Manager and Control Panel are greatly cumbersome to use in a rush.

Windows Server 2016 is soon to be released (Technical Preview already under testing). This is built on a Windows 10 Core, and will address the interface issues inherited by 2012.

Stability/Server Features: 4 out of 5
Ease of Use: 3 out of 5


Windows 10, the source of much controversy over the last 6 months or so, is finally upon us, and has been for a solid month or two now. Officially released on July 29th 2015, the first machines of users who opted in to the free upgrade process began to take the plunge. I take a look at 10’s myriad positives and pitfalls, and cast a viewpoint on whether Microsoft are onto a winner or not…

The Good

Task View.
Yes, the addition of a Mac-like Exposé/Mission Control window-peek feature. This one I like a lot: a quick tap of Windows Key + Tab will spring your Windows 10 desktop into life and display each open application in a handy, easy-to-view minified group view. This scales seamlessly across multiple physical monitors too; on my office station I currently have 3 monitors, each heavily populated with application windows. Pro tip: mapping the keystrokes to a spare macro button on your mouse really speeds this up.

The Start Menu.
It’s back! OK, hear me out on this one. A lot of people swear by the Metro interface of 8 and 8.1, and were early adopters from the first versions of Windows 8. The claim was that it was much quicker to find certain settings areas or applications using Metro’s search functions. I agree it may have been quicker to find things, but having Metro shut off your view of any open apps and your taskbar, on all monitors, whilst it did so, was such a massive hindrance in a business environment that it killed any hint of productivity you might have had going at the time. And don’t get me started on the location of the shutdown/reset buttons! For me, the return of a semi-traditional start menu layout, one which doesn’t disrupt your desktop view when you open it, was critical to the success of Windows 10. Kudos to Microsoft on the integration of Metro tiles into an otherwise unused space.

Boot-up Time.
Restarts in Windows are sometimes a necessity, whether to apply those pesky updates, or simply because your work machine that’s been up for 162 days is starting to bog down a little. Getting back up and into your desktop is better if it happens as quickly as possible. Again taking my fairly solid work machine as a benchmark, I’ve timed this using extremely high-tech scientific instruments (a Samsung Galaxy S5) at a fraction over 9 seconds. This is with an enterprise-level Intel SSD as the boot drive, and timed only to the login prompt (as our domain logon would add precious unfair seconds). So, to summarise: speedy, yes, good.

The Bad

Windows Updates.
At the time of writing this piece, I’m going to give Microsoft the benefit of the doubt and credit them with the assumption that Windows Update is simply not finished.

Firstly, Microsoft have found the need to ‘hide’ Updates in the most illogical place, and to make matters worse, have left no breadcrumb to where they’ve put it. Naturally, you’d type “Update” into the search box. Nope, nothing. OK, well, it’s usually in Control Panel, so I’ll head there. Nope, nothing. Hmm. It turns out it’s hidden in the “All Settings” section of the notification panel that pops out of the right-hand side of the screen. Why? And furthermore, why didn’t it come up in the search results for “Update”? Poor usability.

Secondly, once you’ve managed to find and launch Windows Update, you’re greeted with a stripped-down Metro-app-style interface. Personal gripes aside, there simply isn’t the level of control in this interface that there needs to be. You have 200 updates to apply to a newly installed system? OK, that’s fine, but you can’t de-select a single one of them. You have to install them all, then go in and uninstall what you didn’t want afterwards from Programs & Features. Not cool.

The final gripe about updates (and yes, I’m aware a lot of this can be adjusted via GPOs etc.) is forced reboots at off-peak times, or scheduled reboots within the next 4 days. Nope. No thank you. You do not have permission to reboot my machine at 3.30am, ever. And forcing me to pick a time in the next 4 days ONLY for a forced reboot earns you a free ticket on the train to disabling the Windows Update service.

My data is mine. That may seem like a silly statement, but it seems it needs to be reiterated again and again: it’s mine, all of it, and I don’t want any of it being needlessly transmitted back to Redmond HQ. By default, if you don’t delve into the hidden options sections of the Windows 10 install process, you’ll be sharing a lot more than the odd tracking cookie from a dodgy website with our pals over in the marketing team at Microsoft. Speech input, pen input, calendar details, contact information, geographic location and raw URL browser history are all openly shared and transmitted back to Microsoft at the drop of a hat. Add the staggering misuse of trust that is openly sharing your unique advertising ID with third parties, and you’d be excused for thinking someone was pulling your leg. Nope. All of this is enabled by default in the Windows 10 installation procedure.

You can disable it, but you’ll need super-sharp eyes to catch the “Customize Settings” link at the bottom of one of the nondescript install screens. The good news is that you can also turn everything off within the OS, so don’t fret too much if you did miss it. This sort of sharing of information is OK if you want to help Microsoft improve its services and you don’t think the data you’re transmitting is particularly security critical. For an enterprise user working with customers’ entire company infrastructures daily, however, leaking this sort of data is a crippling security flaw. These options should be disabled by default, not enabled and hidden from non-tech-savvy users.

The Summary

As a hard-core enterprise user of Windows 7, I was dead against adopting Microsoft’s previous efforts. 8 and 8.1 fell very short of what they were meant to be; to me it seemed like an exercise in practising how to get the Metro interface to work on the desktop environment. They were slow, clunky, poorly thought out, and just a downright chore to use on a daily basis. 10 has taken a fresh look at Metro and condensed its best bits into the smallest footprint it can, in the newly restored Start Menu. Taking myself as a benchmark, I believe this will win over a large number of the hard-core 7 supporters, as it has me. Coupled with the fancy new multiple desktops, Task View, notification panel and many other features, I truly think that Microsoft have the basis of an OS that will become the new de facto standard for enterprise desktop installations. That said, I do think they are still missing a few tricks. Windows Update is simply not in a finished state, and needs a complete overhaul. The mismatch of where some settings applications live, and why they’re not in Control Panel (EVERYTHING should be in Control Panel, no matter where else it is), is a mystery to me, and again smacks of unfinishedness.

As we’re only a few months into 10, I’m willing to give it the benefit of the doubt and state that, YES, in fact Windows 10 could very well be a game changer. Certainly if the game is to win over the old-school 7 users, and tempt across the lazy 8 and 8.1 users. Windows 10 has great promise, Microsoft just need to finish it 😉


If you or your company provide virtual servers within a Xen virtualisation environment, it’s probably safe to say that you’ve run into network overuse or misuse in the past, on one or more of your hypervisors. Troubleshooting this and finding the VM responsible can be tricky, as many control panels don’t report live virtual interface data. (And even if they did, you can’t connect to them during a large-scale attack!)

We've compiled a few of the simplest and most direct ways of pinpointing exactly which pesky VM is the cause. The only thing you need to have installed? sysstat.

Network Misuse or Overuse (Inbound or Outbound Attacks)

If your network graphs alert you to network spikes or suspicious activity, such as bursting or sustained high PPS (packets per second), then you could have an attack on your hands. With budget VMs being so cheap and attainable, and instant deployment pretty much the norm, it makes sense for malicious third parties to use them as staging platforms to participate in traditional traffic-based DDoS and other common reflection-based attacks.

If the attack is large enough, you will struggle to connect to your hypervisor over the network, so physical access may be required for this one.

The following command will give you a solid overview of network use per interface, including the virtual interfaces bound to your VMs:

sar -n DEV 1 3


This command uses sar, a handy tool that collates and displays various pieces of data from the system activity counters. It can also be used to display, in more useful ways, the contents of binary data files containing system performance history.

  • -n – Reports the network statistics
  • DEV – Targets the network devices specifically
  • 1 – Interval, in seconds, between samples
  • 3 – Number of samples to take before averaging the results
Running the command should give you something along the lines of this:

[Image: Troubleshooting Xen Virtual Machine Network]

The above is largely normal, if you excuse the odd marginally high traffic level. The first four columns are what should interest you: receive and transmit packets per second, and receive and transmit kB/s. If a VM is attacking, or being attacked, these values will usually all be in the hundreds of thousands, and the specific values become hard to read as the columns merge together.
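To avoid eyeballing the columns during an attack, you could filter sar's averaged lines with a small awk script. This is a rough sketch, not part of the procedure above: the 100,000 packets-per-second threshold and the column positions (interface name in column 2, rxpck/s in column 3, as in recent sysstat releases) are assumptions to verify against your own sar output.

```shell
# flag_busy_ifaces: read `sar -n DEV` output on stdin and print any
# interface whose averaged receive packet rate exceeds the threshold.
# Assumes the interface name is in column 2 and rxpck/s in column 3.
flag_busy_ifaces() {
  threshold=${1:-100000}
  awk -v t="$threshold" '/^Average:/ && $3+0 > t+0 { print $2, $3 }'
}

# Usage: sar -n DEV 1 3 | flag_busy_ifaces 100000
```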

The virtual interfaces are nicely named with the VM ID included, so this immediately tells you the unfortunate target or the unscrupulous attacker. However, you still don't know the IP address, and with the attack ongoing you still can't log in to the friendly web GUI to suspend the VM.

The following command can help. There may also be times when you simply don't want to shut the VM down, but you do want to stop the attacks at the network level. Let's assume you want the IP of vm1686.

find / -name vm1686.cfg -exec grep "vif" {} \;


This uses a typical find command, combined with the -exec switch for added functionality.

  • / – Start the search at the root of the filesystem
  • -name – Search by full file name
  • vmXXX.cfg – Substitute the VM ID in here
  • -exec grep "vif" {} \; – Executes a simple grep command on every result found; find substitutes each result's filename in place of the {} braces
Tip: You could even go further with this and pipe it through awk to cut down on the unneeded information: | awk '{ print $3 }'

The output of the above should give you something that looks like this:

[Image: Troubleshooting Xen Virtual Machine Network]

From there, you can block/blackhole/nullroute the IP as you please, without having to shutdown the VM, and without ever needing to access your Hyper-Visors web GUI.
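Putting the pieces together, a sketch like the one below could extract the IP and feed it straight into a null route. The config layout (a vif line containing ip=...) and the address 203.0.113.45 are hypothetical examples; vif formats vary between control panels, so adjust the sed pattern to match your own configs.

```shell
# vm_ip: pull the IP address out of a Xen VM config's vif line.
# Assumes the line embeds "ip=x.x.x.x" somewhere; adjust as needed.
vm_ip() {
  grep 'vif' "$1" | sed -n 's/.*ip=\([0-9.]*\).*/\1/p'
}

# Once you have the address, a null route stops the traffic without
# shutting the VM down (requires root; path and IP are placeholders):
#   ip=$(vm_ip /etc/xen/vm1686.cfg)
#   ip route add blackhole "$ip"/32
```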


In the world of hosted virtualisation environments, disk misuse (or overuse) is an all too common issue that you can face day to day. Often you may be left guessing as to which of the many VMs on a hypervisor is responsible.

We've compiled a few of the simplest and most direct ways of pinpointing exactly which pesky VM is the cause. The only thing you need to have installed? sysstat.

Physical/Underlying Disk Over-Utilisation:

If you monitor your disk IO levels (which you should), you may be alerted to certain disks having critically high levels of input/output utilisation. If this is the case, use the following:

iostat -d -x -k 5 3

  • -d – Show the disk report (excludes the CPU report)
  • -x – Show extended stats (the useful ones, like %util and IO queue size)
  • -k – Display values in kB/s rather than blocks/s (easier to understand the output)
  • 5 – Wait 5 seconds between polls
  • 3 – Poll iostat 3 times, then average the results
This will give you an output something like what’s shown below:

[Image: Troubleshooting Xen Virtual Machine Disk IO Over-Utilisation on the Hyper-Visor]

As you can see, this node is running fine right now. In the far left column you can see the device names; it lists the physical underlying disks as well as the virtual devices attached to the VMs. A high-IO culprit will be instantly visible: the disk causing the problems (often in pairs, as each VM has an image and a swap volume) will usually show very close to 100% in the %util column, and the various read and write columns will report tell-tale high numbers. Use the above as a reference of a healthy, functioning, production-level hypervisor.
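If you'd rather not scan the columns by eye, a small awk filter can pick out the over-utilised devices for you. This is a sketch assuming %util is the final column of `iostat -d -x -k` output (true for most sysstat versions, but worth checking against yours) and a 90% threshold chosen for illustration:

```shell
# busy_disks: read `iostat -d -x -k` output on stdin and print any
# device whose %util exceeds the threshold. Uses $NF (last field)
# rather than a fixed column number, as column counts vary.
busy_disks() {
  threshold=${1:-90}
  awk -v t="$threshold" 'NF > 2 && $NF+0 > t+0 { print $1, $NF"%" }'
}

# Usage: iostat -d -x -k 5 3 | busy_disks 90
```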

But wait, I hear you shouting… What good is a dm-x virtual device name? How can I resolve that to the VM name/number? Good question, see below for the next command to make use of.

lvdisplay | awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}'

Explained: I won't dissect this fully, as it is a heavily awk'd lvdisplay. In brief: lvdisplay contains all of the information you need, and the awk program picks out the important parts. It starts by pulling the 3rd value of the "LV Name" line, which is the VM's logical volume name; this includes the VM ID, which you can use to locate the VM later. It then takes the 3rd value of the "Block device" line, strips the major number, and replaces it with the text "dm-" to make it a bit more readable.

The output will be something like this:

[Image: Troubleshooting Xen Virtual Machine Disk IO Over-Utilisation on the Hyper-Visor]
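To illustrate what the awk program is doing, here it is run over a made-up lvdisplay fragment. Note that some LVM2 releases print the volume's full path under "LV Path" rather than "LV Name", so the pattern may need adjusting on your system:

```shell
# map_dm_to_lv: read lvdisplay output on stdin and print "dm-N <lv name>"
# pairs, matching the one-liner above. The sample volume name used in
# testing is hypothetical.
map_dm_to_lv() {
  awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n}'
}

# Usage: lvdisplay | map_dm_to_lv
```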

It is now very easy to tie together the suspect dm-x device you found earlier, to a much more useful VM ID. If you want a quick fix, issue:

xm reboot vmXXX

This will gracefully reboot the VM if it is still responding.

xm console vmXXX

This will open the VM's console, where you can either see what's going on, or log in and stop any processes you deem unruly.

xm destroy vmXXX

This will force an immediate, ungraceful shutdown of the VM.


The importance of server location..

With stronger network transport links, and high-speed connectivity more widely available, having servers deployed in multiple geographical locations is now easier than ever. With that in mind, why might you need a server somewhere other than where you are personally based?

The primary reason a server is located in a specific physical location is latency. Latency is the amount of time it takes for you, in one location, to send a request to a remote server and receive a valid response back from it. Lower latency means less time for that request to bounce back from the server, and translates very literally into how quickly a server can respond to requests and serve content. As you can probably tell, having as low a latency as possible is not only desirable, but in some cases a requirement. Custom backend systems and VoIP systems, for example, rely very heavily on low-latency connections to avoid data loss or voice garble on data or digital voice communications. The main way to reduce latency is to lower the number of device hops between the user and the server: the fewer physical hops, the lower the latency should be, and a shorter physical distance to the server means fewer hops. If your users are located in Germany, for example, it may be best to position your server in a Germany-based datacentre, even though you as the administrator may very well be based in the US or the UK.
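If you want to put numbers on this from your users' vantage point, ping's summary line gives you the average round-trip time, and traceroute shows the hop count. The helper below is a small sketch assuming the common Linux iputils summary format ("rtt min/avg/max/mdev = a/b/c/d ms"); example.com is a placeholder host.

```shell
# avg_rtt: pull the average round-trip time (ms) out of ping's summary
# line. Splits the line on "/" so that the avg value lands in field 5;
# matches both Linux ("rtt ...") and BSD ("round-trip ...") summaries.
avg_rtt() {
  awk -F'/' '/^rtt|^round-trip/ { print $5 }'
}

# Usage:
#   ping -c 10 example.com | avg_rtt     # average latency in ms
#   traceroute example.com               # each numbered line is one hop
```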

Another key reason why geographical server location might be important is SEO, and localised results or biasing. If you are targeting a product or service at a certain geographical demographic, key search engines such as Google and Bing will add considerable weight to your listings in that area if the content being served is physically hosted in the same area (or at least the same country).

Our Datacentres..

We have datacentres in the US (New York), Germany (Frankfurt) and in the UK (Kent). These locations were chosen primarily for their strong global core network links. Overall speed and availability of interconnects to different transit providers are the main reasons for us creating our point of presence in these countries.

We are perfectly placed to sculpt and provide extremely low-latency routes for clients in most of mainland Europe through our Frankfurt presence, while dedicated Layer 2 links between our UK and US facilities mean we can offer extremely direct and efficient routes across the Atlantic Ocean. This lends itself very nicely to failover applications, something we pride ourselves on providing to clients.

With the technical talk out of the way, we also chose these locations as they are nicely spaced out geographically. We can provide services locally to many areas of the globe, and due to the physical distance between them all, even in the event of a nationwide network connectivity problem, only a small part of our network would ever be affected.

If you would like more information on geographical failover or other services in any of our facility locations, please Contact Us.


© VooServers Ltd 2016 - All Rights Reserved
Company No. 05598165