Dave Byrne, Author at VooServers

How To: Configure Multiple VLAN Interfaces In SolusVM (KVM)



Posted on  - By

There may be times when you wish to give VMs on one of your SolusVM nodes access to IP resources that are segmented into discrete VLANs at the network level. To do this, you need to create network bridge interfaces on the node and attach VLAN interfaces to them. This guide shows how I accomplished this.

  1. Configure the physical interface that supplies the node with the VLAN-tagged traffic. In this example, we have trunked eno2 with VLANs 220 and 221, as we have a group of VMs that need to bind IPs within these VLANs.

    [root@solus-node01]# cat ifcfg-eno2
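The contents of this file were not reproduced above. On a RHEL/CentOS-style system, a trunk interface that only carries tagged traffic (no IP of its own) might look like the following sketch; treat it as an assumption, not the author's exact file:

```
# /etc/sysconfig/network-scripts/ifcfg-eno2
DEVICE=eno2
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
# No IPADDR: this NIC only carries the VLAN-tagged traffic
```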

  2. Configure your VLAN alias interfaces. Note that we designate each interface to its own new bridge interface; this is a required step.

    [root@solus-node01]# cat ifcfg-eno2.220
    [root@solus-node01]# cat ifcfg-eno2.221
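The file contents were not captured. Based on the bridge membership shown in step 5 (eno2.220 in br2, eno2.221 in br1), the VLAN alias files would be along these lines; this is a hedged sketch, not the original files:

```
# /etc/sysconfig/network-scripts/ifcfg-eno2.220
DEVICE=eno2.220
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
BRIDGE=br2

# /etc/sysconfig/network-scripts/ifcfg-eno2.221
DEVICE=eno2.221
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
BRIDGE=br1
```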

  3. Configure your bridge interfaces.

    [root@solus-node01]# cat ifcfg-br2
    [root@solus-node01]# cat ifcfg-br1
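Again, the contents were not captured; a minimal bridge definition on RHEL/CentOS is small, and a sketch might be:

```
# /etc/sysconfig/network-scripts/ifcfg-br2
DEVICE=br2
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-br1
DEVICE=br1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
```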

    At this point, if you want the host node to also have an IP within these VLANs, bind it to the bridge interface directly; the usual IPADDR, PREFIX, GATEWAY, etc. directives apply.

  4. Bring all new interfaces up.

    [root@solus-node01]# ifup eno2.220
    [root@solus-node01]# ifup eno2.221
    [root@solus-node01]# ifup br2
    [root@solus-node01]# ifup br1

  5. Check the state of your bridges.

    [root@solus-node01]# brctl show
    <some info redacted>
    br1          8000.0cc47xxxxxxx       no          eno2.221
    br2          8000.0cc47xxxxxxx       no          eno2.220

    Note that you should see your two new bridges, each with the relevant VLAN alias interface attached. You will also have at least one other bridge (br0), which has been removed from the output above to simplify things.

    Now that you have bridges available, you can begin assigning them to VMs that need access. In my case, I had to use KVM Custom Config in SolusVM to be able to a) specify the right bridge and b) create a second interface inside the VM.

  6. Custom config for a sample VM.

    <domain type='kvm'>
      <os>
        <type machine='pc'>hvm</type>
        <boot dev='hd'/>
        <boot dev='cdrom'/>
      </os>
      <clock sync='localtime'/>
      <devices>
        <graphics type='vnc' port='xxxx' passwd='xxxxxxxx' listen=''/>
        <disk type='file' device='disk'>
          <source file='/dev/vg_xxxxxxxx/kvmXXX_img'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <disk type='file' device='cdrom'>
          <target dev='hdc'/>
        </disk>
        <interface type='bridge'>
          <source bridge='br1'/>
          <target dev='kvmXXX.0'/>
          <mac address='00:16:3c:xx:xx:xx'/>
        </interface>
        <interface type='bridge'>
          <source bridge='br2'/>
          <target dev='kvmXXX.1'/>
        </interface>
        <input type='tablet'/>
        <input type='mouse'/>
      </devices>
    </domain>

    Note that this is heavily edited; the main focus is the duplicate “interface” section, and that the duplicate has no MAC address specified (important). You can also see that br1 and br2 have been specified. Make a mental note of which is which so that, inside the VM, you can assign IPs in the relevant VLAN.

    Save the custom config and reboot the VM. Assign IPs manually once booted into the VM.

  7. Checking your bridge status now should show the VM interface active within it.

    [root@solus-node01]# brctl show
    <some info redacted>
    br1         8000.0cc47axxxxxx       no          eno2.221
    br2         8000.0cc47xxxxxxx       no          eno2.220

You can see more of the tutorials written by our own technical engineers here.

Posted on  - By

What is it?

The security conscious among you will be well versed in the technicalities of speculative-execution side-channel exploits such as Spectre and Meltdown, affecting Intel Core, Celeron, Pentium, Xeon and even Atom CPUs (along with a whole host of AMD-based chips). Ever keen to keep Intel’s security team on their feet, researchers from Belgium, Israel, the USA and Australia have discovered an exploit within Intel’s SGX instruction set. On 14th August 2018, Intel released information regarding this new variant of side-channel cached-data exploit, known as “Foreshadow”: a Level 1 data cache exploit able to render guest VM data readable to other guests on a virtualised platform that makes use of SGX extensions on an Intel CPU.

There’s a difference this time, however: L1TF Foreshadow (referred to as L1TF from here on out) only affects Intel CPUs that implement SGX, and SGX (Software Guard Extensions) is an instruction set only present on Intel’s “Core” line-up of CPUs: the Core i3, i5, i7 and even i9 chips from Skylake (6th generation) onwards.

Does it affect you?

VooServers’ enterprise-level infrastructure clients and those within our hosted virtual environments will be pleased to know that we do not make use of any “Core” chips from Intel. Our core service backbone and our bespoke enterprise scenarios consist solely of Xeon CPUs. As such, there is no scope whatsoever for data breaches utilising this exploit for customers within VooServers managed infrastructure.

(There may be a negligible quantity of unmanaged, custom dedicated server customers with aging, legacy hardware that could be affected, however these are not virtualisation environments and hence should pose no risk to customer data. If you feel you are affected by this, please reach out to our support team at support@vooservers.com)


Posted on  - By

Overview & Version Information:

I will be showing how to install and configure Oracle Fusion Middleware GoldenGate 12.3 to replicate a full Oracle schema from a 12c instance on Oracle Linux 7 into an MSSQL Server 2014 Std instance on Windows Server 2016.

  • Oracle Golden Gate v12. (For Oracle Linux 7)
  • Oracle Golden Gate v12. (For Windows Server 2016)
  • Oracle Linux 7.4 (Kernel 4.1.12-112.14.2.el7uek.x86_64)
  • Oracle 12c (v12.
  • Windows Server 2016 (x64 Datacentre)
  • Microsoft SQL Server 2014 (Standard v12.0.5207.0)

We will be making use of EXTRACT and REPLICAT processes for the initial data load, and also utilising TRAILs, CDC and CDD to handle the live change-data replication.

Throughout this article, Oracle Golden Gate will be referred to as OGG.

Installing OGG into Oracle Linux 7 (12c DB):

Head to https://edelivery.oracle.com and download the relevant OGG 12.3 DLP, which at the time of writing is “V975837-01.zip”. Transfer this zip file to a convenient location on your OL7 server.

<<< On the SOURCE SERVER >>>

On OL7, create the staging directory, and prepare by installing readline wrapper:

[root@shell]# mkdir /stage
[root@shell]# mv /path/to/zipfile.zip /stage/
[root@shell]# yum -y install readline readline-devel
[root@shell]# cd /stage
[root@shell]# wget ftp://ftp.pbone.net/mirror/download.fedora.redhat.com/pub/fedora/epel/7/x86_64/Packages/r/rlwrap-0.42-1.el7.x86_64.rpm
[root@shell]# unzip V975837-01.zip
[root@shell]# yum install rlwrap-0.42-1.el7.x86_64.rpm

Setup aliases in OL7 for GGSCI and SQLPLUS:

[root@shell]# su -l oracle
[oracle@shell]# nano ~/.bashrc

# Aliases for GoldenGate
alias sqlplus="rlwrap sqlplus"
alias ggsci="rlwrap ./ggsci"

[oracle@shell]# . .bashrc && alias
[oracle@shell]# mkdir /u01/app/oracle/product/ogg_src

NOTE: You may change the directory name created above; it must be within your Oracle installation’s product directory, but you may name it whatever you wish. On later installations, I suffixed the directory with the version number (ogg_src_12-3).

Run the OGG installer:

Connect to the console of the server: the VM console if virtualised, or a physical KVM console if using a dedicated system. You need to run the next steps in a graphical environment; this guide assumes you have a functioning X server or other compatible desktop environment to use.

Log on as your Oracle user, open a Terminal window:

[oracle@shell]# cd /stage/fbo_ggs_Linux_x64_shiphome/Disk1
[oracle@shell]# ./runInstaller

The graphical OGG installer will now start. Follow the on screen instructions.

Select 12c when prompted.

Your details here may differ from the screenshot shown.

Software Location: The full working path to the ogg product folder that you created earlier
Start Manager: Checked (starts manager as automatic Linux server)
Database Location: The oracle DB Home location of your instance
Manager Port: I’ve used a slightly different port, you are welcome to use whatever you wish, but be sure to substitute it in later steps of the install.

Let the installer complete.

Done, installation is complete. We will now work on installing OGG into Windows Server 2016.

Installing OGG into Microsoft Windows Server 2016 Datacentre:

<<< On the TARGET SERVER >>>

Head over to https://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html and download the relevant version of OGG for Windows Server MSSQL. At the time of writing it should be “Oracle GoldenGate for SQL Server (CDC Capture) on Windows (64bit)”, around 75 MB. Transfer the downloaded zip to your MSSQL server.

Create a new directory; for this example we are using “C:/GoldenGate”. Copy the contents of the extracted zip into the new directory.

Open an Administrator level, elevated command prompt, and change directory to the GoldenGate directory you created.

Run GGSCI and create the OGG subdirectories:

C:/Users/oggdba> cd C:/GoldenGate
C:/GoldenGate> ggsci.exe
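The GGSCI session itself was not captured. The standard command to create the OGG working subdirectories (dirprm, dirdat, dirdef, dirrpt and so on) from within GGSCI is:

```
GGSCI> CREATE SUBDIRS
```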


Give the MGR process a custom name:
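The commands for this step were not captured. On Windows, OGG takes the Manager service name from the MGRSERVNAME parameter in the GLOBALS file, which must exist before you run install.exe; the service name OGGMGR below is my own illustrative choice:

```
GGSCI> EDIT PARAMS ./GLOBALS

-- Contents of the GLOBALS file:
MGRSERVNAME OGGMGR
```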




Install the OGG Manager as a service, with some options:

C:/GoldenGate> install.exe ADDEVENTS ADDSERVICE AUTOSTART

Restart your Windows system and verify that the OGG MGR starts on boot:
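The verification command was not captured; from GGSCI, checking the Manager looks like this (you could equally check the service in services.msc):

```
C:/GoldenGate> ggsci.exe
GGSCI> INFO MGR
```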


Create MSSQL Target Database, Schema, User and DSN:

This section will outline the basics of setting up the OGG target DB and DSN, although it should be taken with some interpretation: use your own settings, permissions, naming schemes, etc. as appropriate.

<<< On the TARGET SERVER >>>

Open SQL Server Management Studio, and create a new database to be used for storing your OGG replicated data set:

Create the new DB.

Name it something sensible.

In my experience, you MUST change the Collation (default character set) to “Latin1_General_BIN2”. Without this set, I usually run into issues trying to replicate certain Unicode characters in fields in the source DB.

Create SCHEMA within new DB:

Right click on your new DB, and select “New Query”, type:
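The query itself was not captured; given the note that follows, it would be along these lines (substitute your own schema name):

```sql
CREATE SCHEMA SCHEMA1;
```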


NOTE: “SCHEMA1” must be the name of your source SCHEMA that you are replicating.

Create the new User, and give SCHEMA ownership to user:

Right click “Security” in the SQL Instance branch (not within the Database), and select New Login.

Ensure SQL Server Authentication is used, and set a secure password. Select your recently created DB as the user’s default DB, and choose “British English” as the user’s default language.

Within “User Mapping”, check the DB you just created, and ensure “db_owner” is selected. Take this opportunity to set the default SCHEMA to the SCHEMA you created earlier.

Create System DSN for use by OGG:

Open Control Panel, Administrative Tools, and open “ODBC Data Sources (64bit)”. Change tab to “System DSN” and click the ADD button.

Select “ODBC Driver 11 for SQL Server”, name your DSN something logical and simple, in this example “oggrepldsn”, select the local SQL Server instance from the drop down. Ensure you select to use SQL Server Authentication. Check the box to connect to SQL to obtain additional settings, use the user you created earlier.

On the next screen, change the default DB to the DB created earlier, leave everything else untouched, and finish the DSN wizard.

Configuring GGSCI and Preparing for Initial Data Load

<<< On the SOURCE SERVER >>>

Verify the manager is running OK:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ggsci


[Here you may add any additional manager options you want, by default, you only need the PORT parameter]


[Verify the manager is running, you may also use START or STOP MGR]
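The GGSCI commands were redacted above. Based on the bracketed notes, a minimal sketch of the Manager parameter file and status check follows; port 7890 matches the MGRPORT used later in this guide:

```
GGSCI> EDIT PARAMS MGR

-- Manager parameter file:
PORT 7890

GGSCI> INFO MGR
```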

Create Schema TRANDATA

GGSCI> DBLOGIN USERID <schema-user-here>
Password: <user-pass-here>

Substitute “SCHEMA1” for your schema you wish to replicate.
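The ADD TRANDATA command itself was redacted; following the DBLOGIN above and the schema naming note, it would read something like:

```
GGSCI> ADD TRANDATA SCHEMA1.*
```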

NOTE: “ADD TRANDATA” only adds TRANDATA for the tables specified by the selection after it. If you add new tables later, they will have no TRANDATA and therefore cannot be replicated until TRANDATA has been added. This is fine for this example; however, a more robust solution would be ADD SCHEMATRANDATA, which adds TRANDATA at schema level rather than table level, so new tables within the schema are automatically included.

Verify that the TRANDATA is added OK:
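The verification command was redacted; it is typically:

```
GGSCI> INFO TRANDATA SCHEMA1.*
```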


Create source table definition parameters:


DEFSFILE /u01/app/oracle/product/ogg_src/dirdef/<filename-here>.def, PURGE 
USERID <oracle-user> PASSWORD <oracle-user-password>

Substitute a relevant .def file name into the DEFSFILE parameter; you’ll need to use this later.

NOTE: In my example, I exclude some tables that I know I am not going to need in my replication. You may or may not want to do this. Be aware that you cannot generate definitions for externally organized tables (if you’re using them).
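Pulling the surviving fragments together, the defgen parameter file might look like this sketch; the TABLE and TABLEEXCLUDE lines are illustrative assumptions:

```
-- dirprm/defgen.prm
DEFSFILE /u01/app/oracle/product/ogg_src/dirdef/<filename-here>.def, PURGE
USERID <oracle-user> PASSWORD <oracle-user-password>
TABLE SCHEMA1.*;
TABLEEXCLUDE SCHEMA1.<some-unneeded-table>;
```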

Generate the source table definitions using DEFGEN:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ./defgen paramfile dirprm/defgen.prm

This creates the .def file within ./dirdef/

The generated *.def file now needs to be transferred to the TARGET SERVER, and placed within $INSTALL_DIR/dirdef/

Configure Initial Data Load EXTRACT

These steps configure the initial load groups that will copy source data and apply it to the target tables.

<<< On the SOURCE SERVER >>>

Add the initial data load EXTRACT batch task group:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ggsci


NOTE: EINI9001 is created from the following format EINI<unique ID, max 4 digits>
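The ADD command was redacted; for an initial-load (batch) extract it would be along the lines of the following. Since initial-load groups are batch tasks, the verification step that follows is typically INFO EXTRACT *, TASKS:

```
GGSCI> ADD EXTRACT EINI9001, SOURCEISTABLE
```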

Verify the EXTRACT created with the following:


Configure the initial data load EXTRACT PARAM file:


-- GoldenGate Initial Data Capture
USERID <oracle schema user here>, PASSWORD <oracle schema password here>
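Only a fragment of the parameter file survived above. A fuller sketch, assuming the group name EINI9001 and the target-side initial-load replicat group RINI9001 referenced later, might read:

```
EXTRACT EINI9001
-- GoldenGate Initial Data Capture
USERID <oracle schema user here>, PASSWORD <oracle schema password here>
RMTHOST <target-server-IP-address>, MGRPORT 7890
RMTTASK REPLICAT, GROUP RINI9001
TABLE SCHEMA1.*;
```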

<<< On the TARGET SERVER >>>

Add the initial data load REPLICAT batch task group:


-- GoldenGate Initial Data Load Delivery 
TARGETDB oggrepldsn, USERID oggrepluser, PASSWORD <SQL user password here>
DISCARDFILE ./dirrpt/RINI9001.txt, PURGE 
SOURCEDEFS ./dirdef/<definition-file-name-from-earlier>.def OVERRIDE
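The ADD command for this group was redacted; for an initial-load replicat it is typically ADD REPLICAT RINI9001, SPECIALRUN, and the surviving fragment would sit in a parameter file along these lines (the MAP statement is an assumption based on the schema created earlier):

```
GGSCI> ADD REPLICAT RINI9001, SPECIALRUN

-- dirprm/rini9001.prm
REPLICAT RINI9001
-- GoldenGate Initial Data Load Delivery
TARGETDB oggrepldsn, USERID oggrepluser, PASSWORD <SQL user password here>
DISCARDFILE ./dirrpt/RINI9001.txt, PURGE
SOURCEDEFS ./dirdef/<definition-file-name-from-earlier>.def OVERRIDE
MAP SCHEMA1.*, TARGET SCHEMA1.*;
```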

INTERLUDE – Getting to this point in the guide assumes you have created the relevant tables/DDL in your target MSSQL database. OGG EXTRACT and REPLICAT processes will not create tables for you within MSSQL; they expect the tables to exist to insert into on REPLICAT. There is no single agreed method for this. Personally, I export DDL from SQL Developer and then spend a lot of time pruning that output down to just the CREATE TABLE and KEY statements. Of course, you’re then left with a lot of DDL statements that are only valid within Oracle; you’ll need to convert them into SQL that MSSQL understands. There are many ways to do this: premium paid-for third-party tools, free online tools such as SQLines, or manual conversion if you don’t have many tables (although I wouldn’t recommend that).

<<< On the SOURCE SERVER >>>

Start the initial data load EXTRACT process:


View its progress with:
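The commands for these two steps were redacted; they are typically:

```
GGSCI> START EXTRACT EINI9001
GGSCI> VIEW REPORT EINI9001
```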


NOTE: There may be many errors to resolve on your first EXTRACT run: table names not existing, data type mismatches, column names not existing, permissions, network-level restrictions such as firewalls, etc.

Assuming the EXTRACT runs, REPLICAT will start on the TARGET SERVER, verify this, and its results, with the following on the TARGET SERVER:


If you have made it this far, you now have a DB in MSSQL with your Oracle data set in it, congrats! If that’s all you wanted, you can stop here, but most of the time, you will be aiming for live change data replication from Oracle. For this, we need to make use of a few more components of OGG.

Specifically, CDC and CDD: Change Data Capture (via EXTRACT on SOURCE) and Change Data Delivery (via REPLICAT on TARGET). The next section explains how to do this.

Configuring Change Data Capture via EXTRACT

Through the use of trail files being shipped from SOURCE to TARGET, OGG can replicate changes in data detected at source (and written to the trail files). Here’s how to do that.

<<< On the SOURCE SERVER >>>

Add the EXTRACT group for CDC:


NOTE: “THREADS” is an integer specifying how many EXTRACT threads are maintained to read the different redo logs on the different Oracle instance nodes. If you are not running an Oracle Cluster or RAC, set this to 1; a higher value does not improve single-instance performance.
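The ADD command was redacted. Assuming the CDC group name EORA01 (the closing summary refers to an EORA process), it would be something like:

```
GGSCI> ADD EXTRACT EORA01, TRANLOG, THREADS 1, BEGIN NOW
```

The verification step that follows is typically GGSCI> INFO EXTRACT EORA01.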

Verify it created OK with:


Configure the EXTRACT group for CDC:


-- Change Capture parameter file to capture
USERID <oracle-user-name>, PASSWORD <oracle-user-password>
RMTHOST <target-server-IP-address>, MGRPORT 7890
RMTTRAIL ./dirdat/1p

NOTE: The 2 character (max) identifier at the end of RMTTRAIL is important, make it unique, and remember it for later.
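The fragment above lacks its EXTRACT header and TABLE statement; a fuller sketch, again assuming the group name EORA01, might read:

```
EXTRACT EORA01
-- Change Capture parameter file to capture
USERID <oracle-user-name>, PASSWORD <oracle-user-password>
RMTHOST <target-server-IP-address>, MGRPORT 7890
RMTTRAIL ./dirdat/1p
TABLE SCHEMA1.*;
```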

Create the GoldenGate Trail:
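The trail-creation command was redacted; tying the trail to the CDC extract (names as assumed above), it would be:

```
GGSCI> ADD RMTTRAIL ./dirdat/1p, EXTRACT EORA01, MEGABYTES 100
```

The verification steps that follow are typically GGSCI> INFO RMTTRAIL * and, once the extract is started with GGSCI> START EXTRACT EORA01, GGSCI> STATS EXTRACT EORA01 for the results.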


Verify that it created OK:


And verify the results:


Configuring Change Data Delivery via REPLICAT

The trail files defined earlier will now be present on the TARGET server, and a CDD REPLICAT process can use them to replicate changed data live into the TARGET.


Edit Global PARAMs and create the checkpoint table:

Create REPLICAT checkpoint group:


NOTE: The two letter prefix for EXTTRAIL is the same as earlier.
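The redacted steps here usually consist of naming a checkpoint table in GLOBALS, creating it, and adding the replicat against the trail. A sketch, assuming the group name RMSS01 (the closing summary refers to an RMSS process) and an illustrative checkpoint table name:

```
GGSCI> EDIT PARAMS ./GLOBALS

-- Add to the GLOBALS file:
CHECKPOINTTABLE oggrepluser.ogg_checkpoints

GGSCI> DBLOGIN SOURCEDB oggrepldsn USERID oggrepluser PASSWORD <sql-user-password>
GGSCI> ADD CHECKPOINTTABLE
GGSCI> ADD REPLICAT RMSS01, EXTTRAIL ./dirdat/1p
```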

Configure REPLICAT PARAM file for CDD:


TARGETDB oggrepldsn, USERID oggrepluser, PASSWORD <sql-user-password>
SOURCEDEFS ./dirdef/1pmoracle.def
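A fuller sketch of the CDD parameter file, with its header and a MAP statement added (group name and mapping are assumptions consistent with the rest of this guide):

```
REPLICAT RMSS01
TARGETDB oggrepldsn, USERID oggrepluser, PASSWORD <sql-user-password>
SOURCEDEFS ./dirdef/1pmoracle.def
MAP SCHEMA1.*, TARGET SCHEMA1.*;
```

The redacted start and verification commands that follow are typically GGSCI> START REPLICAT RMSS01, GGSCI> INFO REPLICAT RMSS01 and GGSCI> STATS REPLICAT RMSS01.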

Start the REPLICAT process:


Verify it is running with:



Providing everything is running without issue, you are now finished: you have a live replication scenario shipping data from Oracle 12c on Oracle Linux 7 into MSSQL 2014 on Windows Server 2016. This will continue to run for as long as the EORA and RMSS processes are running. The initial data load EXTRACT and REPLICAT (EINI and RINI) are now redundant, unless you ever want to drop your whole data set from MSSQL and have it replicated from scratch again.

Some of the above processes may seem simple; however, documentation for much of it is sparse, and what can be found in the Oracle documentation is not always easy to interpret. In our testing, I was able to see change data appear in TARGET around 1 second after committing in SOURCE.

Please feel free to reach out to me with any questions you may have. I can’t promise I can answer them all, but I will do my best to assist if I can.

Posted on  - By

‘Dirty Cow’ may sound humorous and far removed from the world of IT systems security, but the truth couldn’t be more different. Gaining its name from a play on the acronym for the Linux kernel mechanism ‘Copy On Write’ (COW), Dirty Cow is the latest in a seemingly never-ending timeline of Linux kernel exploits.

The theory is relatively simple: a malicious application sets up a race condition in order to modify a root-owned file (executable or otherwise) when it is mapped into the personal memory space of a non-privileged user. These changes are then committed to storage by the kernel. Not ideal. TheRegister.co.uk explained the process perfectly:

The exploit works by racing Linux’s CoW mechanism. First, you have to open a root-owned executable as read-only and mmap() it to memory as a private mapping. The executable is now mapped into your process space. The executable has to be readable by the process’s user to do this.

Meanwhile, you repeatedly call madvise() on that mapping with MADV_DONTNEED set, which tells the kernel you don’t actually intend to use the memory.

Then in another thread within the same process, you open /proc/self/mem with read-write access. This is a special file that allows a process to access its own virtual memory as if it was a file. Using normal seek and write operations, you then repeatedly overwrite part of your own memory that’s mapped to the root-owned executable. The overwrite shouldn’t affect the executable on disk.

So now, your process has the read-only binary mapped in as a private read-only object, one thread is spamming madvise() on that read-only object, and another thread is writing to that read-only object. Writing to that memory object should trigger a CoW: the touched page of the executable will be altered only in the process’s memory – not the actual underlying root-owned file that’s mapped in.

However, due to the aforementioned bug, the kernel performs the CoW operation but then allows the process to write to the read-only mapped executable anyway. These changes are committed to disk by the kernel, which is bad news.

Whilst this exploit technically isn’t new (it has been present in kernel versions dating back to 2007), it has rocketed in priority and significance due to public acknowledgement in major bug trackers. Fully working code releases that make (malicious) use of this exploit are now circulating in infosec communities, ripe for misuse. Thankfully, most major distributions have already released patches to resolve the bug.

RedHat – https://access.redhat.com/security/cve/cve-2016-5195
Debian – https://security-tracker.debian.org/tracker/CVE-2016-5195
Ubuntu – http://people.canonical.com/~ubuntu-security/cve/2016/CVE-2016-5195.html

Linux Kernel creator and (still) key developer, Linus Torvalds, summarised the fix in his own release last week:

This is an ancient bug that was actually attempted to be fixed once (badly) by me eleven years ago in commit 4ceb5db9757a (“Fix get_user_pages() race for write access”) but that was then undone due to problems on s390 by commit f33ea7f404e5 (“fix get_user_pages bug”). In the meantime, the s390 situation has long been fixed, and we can now fix it by checking the pte_dirty() bit properly (and do it better).
Read the full release here

Posted on  - By

In this guide, I show you how to install Postfix and PostFWD (Postfix Firewall Daemon), configure rate limiting for a specific recipient domain, and integrate PostFWD into Postfix.


PostFWD v1.0+ (we will install v1.3.5)
Postfix v2.5+ (we will install v2.6.6)
CentOS 6.x (we are working in 6.8 x64)
You may also need things such as nc (netcat), telnet, and various Perl modules (detailed later)

Install Postfix

Postfix is a strong, reliable and extremely common SMTP server. CentOS 6 comes preinstalled with Postfix, but to use PostFWD you need to ensure you are running a version higher than 2.5.

Find out using ‘rpm’:

[root@server]# rpm -qa | grep postfix

Or use ‘yum’:
[root@server]# yum info postfix

Once installed, if for some reason you were using sendmail as your default MTA (Mail Transfer Agent), you’ll need to change this to postfix using ‘alternatives’:
[root@server]# alternatives --set mta /usr/sbin/postfix

Check you are running a valid version of Postfix:
[root@server]# postconf mail_version
mail_version = 2.6.6

Ensure Postfix starts on a system reboot:
[root@server]# chkconfig postfix on

Configure Postfix

Configuring Postfix is a rather open ended task, and will depend on what you are using the SMTP server for. If you have come this far, you likely already have a Postfix configuration, or you are simply using it to relay mails for a specific application. Either way, you should look to set some of the most basic Postfix configuration options in ‘/etc/postfix/main.cf’:

myhostname = Set to the mail server's FQDN/hostname
mydomain = The domain name of the mail server
myorigin = Usually the same as $mydomain
inet_interfaces = Set to all to listen on all network interfaces
mydestination = $myhostname, localhost, $mydomain
mynetworks =, /32
relay_domains = $mydestination
home_mailbox = Maildir/

If you are relaying from a specific location/server, you will of course need to adjust how you do this. This How-To is not a Postfix/SMTP Server configuration guide. It is a PostFWD integration guide to Postfix.

Install PostFWD

PostFWD, or Postfix Firewall Daemon, is a daemonized process that acts as a check policy service for Postfix. It has a customisable rule-set that it applies dynamically to any and all mail that Postfix sees; we’ll touch more on that later. It’s very powerful, offering several mail-handling features that would otherwise not be possible in Postfix alone (or any other MTA, for that matter).

We need version 1.0 or higher, so grab the tarball from postfwd.org, and run through some initial setup steps:
[root@server]# cd /usr/local
[root@server]# wget http://postfwd.org/postfwd-1.35.tar.gz 
[root@server]# tar -xvzf postfwd-1.35.tar.gz
[root@server]# mv postfwd-1.35 postfwd
[root@server]# cp /usr/local/postfwd/etc/postfwd.cf /etc/postfix/
[root@server]# cp /usr/local/postfwd/bin/postfwd-script.sh /etc/init.d/postfwd
[root@server]# chkconfig postfwd on
[root@server]# service postfwd start

Whoa there, it’s not that easy. As the PostFWD documentation states quite adamantly, this will not work (or start) without a couple of Perl modules installed.

[root@server]# yum -y install perl perl-CPAN perl-prefork gcc

You’ll need to do the rest in ‘cpan’:
[root@server]# cpan
cpan[1]> install Net::Server::Daemonize
cpan[1]> install Net::Server::Multiplex
cpan[1]> install Net::DNS

Once all of the Perl modules (and Perl itself) are installed, it’s a good idea to run a yum update and reboot the system. Now you are ready to continue and configure PostFWD.

In terms of configuration, the world is your oyster with PostFWD. As the name suggests, it is essentially a firewall for your mail server: it can allow, drop, defer, silently reject, or rate limit, and it can match rules by message character counts, body sizes, send frequency, or a combination of any number of these factors. Want to stop users x, y and z from sending more than 200 MB’s worth of attachments in a 12-hour period? No problem.

In this specific example, we want to rate limit (rather aggressively) all outbound mail to a specific domain: no more than 10 emails every 30 minutes. Mails sent after this limit is reached will be rejected permanently; mails within the limit can be sent at any frequency. (This is unlike the stock rate limiting within Postfix itself, where a 10-emails-in-30-minutes limit would delay ALL mail, sending 1 mail every 3 minutes and delivering everything eventually. In this scenario, that is not helpful.)

Check everything’s working:

At this point it’s a good sanity check to confirm everything is up and listening on the ports you expect. Use netstat to look at the two ports in question; you should see something strikingly similar to the below.

[root@server]# netstat -anpl | grep -E ':10040|:25'
tcp        0      0   *                   LISTEN      10181/postfwd.pid
tcp        0      0        *                   LISTEN      10278/master
tcp        0      0 :::25                       :::*                        LISTEN      10278/master

If you don’t see the above, it means one or both of the services are either not running or unable to bind to their respective ports. Check the services are running, check that things like SELinux aren’t stopping applications from binding to ports, and check /var/log/messages or your other syslog locations for evidence of problems.

Configuring PostFWD:

Earlier on, you copied postfwd.cf into /etc/postfix. It’s time to configure that with your rules. We are going to define just one, to rate limit as described above, but you will likely want a lot more, plus a catch-all style rule to match “everything else”. Remember that our example was built on a custom internal mail server with one specific task to do.

In this example, the only parts of the pre-supplied postfwd.cf we keep are the following (note that the rule carries an id and a recipient_domain match, as the parse check further down confirms):
[root@server]# cat /etc/postfix/postfwd.cf
## Definitions
# Whitelists

## Ruleset
# Rate Limit TO: domain.com - 10 messages in 1800 seconds (30mins)
id=ratelimit001; recipient_domain==domain.com; \
        action=rate(recipient_domain/10/1800/421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.)

Note our rate limiting rule; the syntax is fairly straightforward. Define the recipient domain, give it the ‘rate’ action, and then tell it how many messages to limit, in what time frame, and what action is triggered when the limit is met. For us, we chose to reply with a 421 4.7.1 SMTP response, thus rejecting the inbound RCPT command from the sending mail server.

Once you have your rule in place, check that PostFWD parses it correctly:
[root@server]# /usr/local/postfwd/sbin/postfwd -f /etc/postfix/postfwd.cf -C
Rule   0: id->"ratelimit001"; action->"rate(recipient_domain/10/1800/421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.)"; recipient_domain->"==;domain.com"


Trigger the rate limit manually to see how PostFWD replies to it:
PostFWD comes with a “sample request” file that you can pipe into PostFWD to see how it reacts to differing rules. Modify the file enough to suit your rate limit criteria.

Now throw that sample request at PostFWD using netcat (you may need to install this with ‘yum install nc’).
[root@server]# nc 10040 </usr/local/postfwd/tools/request.sample

The action “DUNNO”, although worrying at first, is actually the desired outcome. PostFWD doesn’t know what to do with the message, so it states “DUNNO” back to Postfix and lets the message pass. Keep firing that command until you hit your rate limit.

[root@server]# nc 10040 </usr/local/postfwd/tools/request.sample
[root@server]# nc 10040 </usr/local/postfwd/tools/request.sample
[root@server]# nc 10040 </usr/local/postfwd/tools/request.sample
action=421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.

BINGO! We hit the rate limit (I’ve excluded pointless command repetition from this guide). You can see that as soon as the rate limit is hit, PostFWD applies our own custom action that we set earlier. 421 4.7.1, message rejected. Now we just need to make that happen automatically, and with Postfix.

Integration with Postfix

The integration of PostFWD into Postfix is relatively simple. For this example, we are going to add PostFWD as a check_policy_service server for Postfix to look up against. As we are specifically filtering on the recipient domain, I am going to add this to the “smtpd_recipient_restrictions” section of Postfix. This section may or may not already exist in your Postfix’s main.cf.

Open /etc/postfix/main.cf and add or amend the following:
smtpd_recipient_restrictions =
       check_policy_service inet:
       permit_mynetworks,
       reject_unauth_destination

The key thing to note here is that the check_policy_service sits ABOVE items such as permit_mynetworks. For us, localhost is a trusted network (see the config earlier on), and the mail we wish to rate limit also comes from localhost; if permit_mynetworks came first, the messages would always be passed and sent, as Postfix would never bother checking with PostFWD via the check_policy_service (it stops processing after a successful OK reply).

And that’s it. Restart PostFWD, then restart Postfix (PostFWD should always be up before Postfix), and you’re good to go. Rate limit events are logged to /var/log/maillog, along with all other mail operations, successful or not. You’ll want to tail this log for a while to see if anything’s going wrong.


A nice and controlled way of testing with actual mail is to telnet into Postfix from the system itself.

Connected to
Escape character is '^]'.
220 mailtest1.vooservers.com ESMTP Postfix
HELO mail.domain.com
250 monitoringtest.vooservers.com
MAIL FROM: test@domain.com
250 2.1.0 Ok
RCPT TO: test@domain.com
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
message goes here
.
250 2.0.0 Ok: queued as 5BECA21C21
QUIT
221 2.0.0 Bye
Connection closed by foreign host.

This connects to the SMTP server (Postfix), HELO’s as a mail server, defines a FROM: address, defines a TO: address, inputs some message body data, and then quits once the message is queued in Postfix. The HELO, MAIL FROM:, RCPT TO:, DATA, message body, ‘.’ and QUIT lines are the text you type in; the rest are Postfix’s replies.

You can repeat this until you hit your rate limit. Tail the maillog in another screen whilst you do this and you’ll see Postfix happily relay all the mail up until you hit your defined rate limit; PostFWD will then step in and reply with the 421 message back to your telnet session. You’ll never get a chance to input a TO: address or any message body data. Perfect.


So to recap, we:
  • Installed Postfix and set it as the system’s default MTA
  • Configured the basics of Postfix, just enough to get it functioning as a bare-bones MTA
  • Installed PostFWD
  • Configured and tested rate limiting rules in PostFWD
  • Integrated PostFWD with the recipient check stage of Postfix

The possibilities with PostFWD are almost endless. I’d recommend anyone embarking on this to check out the full documentation of both Postfix and PostFWD, something that proved invaluable to me at times during our configuration and testing of this (and multiple other) PostFWD instances.


Posted on  - By

If you have one or many MySQL Replication slaves, you may need a handy way to monitor each slave’s status within your existing Nagios monitoring platform. This handy NRPE-based bash script will help you out…

#!/bin/bash
#################################################
# SQL Binary Replication Failure Detection      #
# Dave Byrne @ VooServers Ltd                   #
#################################################

# Is the Slave IO thread running?
slaveio=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Slave_IO_Running | awk '{ print $2 }'`

# Is the Slave SQL thread running?
slavesql=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Slave_SQL_Running | awk '{ print $2 }'`

# Pull the last SQL error, just in case
lasterror=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Last_Error | awk -F : '{ print $2 }'`

# Work out whether it has failed or not
if [ "$slavesql" = "No" ] || [ "$slaveio" = "No" ]; then
  # It's failed, go CRITICAL
  echo "Slave IO Running? ... "$slaveio
  echo "Slave SQL Running? ... "$slavesql
  echo "Last SQL Error:  "$lasterror
  echo "CRITICAL - MySQL Replication Failure!"
  exit 2
else
  # It's good, go OK
  echo "OK - MySQL Replication Running"
  echo $slavesql
  exit 0
fi


  • Enter your MySQL root user’s password where applicable.
  • If either the Slave IO or the Slave SQL thread stops running, the check will return CRITICAL in Nagios.
  • Does not require sudo; runs straight from nrpe.cfg.
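For completeness, hooking the script into NRPE is a one-liner in nrpe.cfg. The command name and script path below are examples, not fixed values; point them at wherever you saved the script.

```
# In nrpe.cfg on the slave (example command name and path):
command[check_mysql_replication]=/usr/lib64/nagios/plugins/check_mysql_replication.sh

# From the Nagios server, test it with:
# check_nrpe -H <slave-address> -c check_mysql_replication
```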

Posted on  - By

To make use of the JSONB features implemented in 9.4, you may be required to upgrade your existing PgSQL 9.3 cluster to 9.4+. Here I cover the basics of performing an in-place upgrade.

  • 1. Add the PostgreSQL repo to apt:

    echo "deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main" > /etc/apt/sources.list.d/pgdg.list

  • 2. Install the repo’s key:

    wget -q -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

  • 3. Update apt sources and install postgresql-9.4:

    apt-get update && apt-get install postgresql-9.4 && pg_lsclusters

  • 4. You will now have two pgsql clusters, your existing 9.3 one and the new default 9.4 one. We don’t need the 9.4 one, so we can drop it:

    pg_dropcluster --stop 9.4 main && pg_lsclusters

  • 5. Use pg_upgradecluster to perform an in-place upgrade of your 9.3 cluster:

    pg_upgradecluster 9.3 main && pg_lsclusters

  • 6. You will be left with a single, upgraded 9.4 cluster.
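To sanity-check the result, you can confirm the server version and poke at the JSONB support that motivated the upgrade. These are standard psql one-liners run as the postgres user; the sample JSON document is just an illustration.

```
sudo -u postgres psql -c "SELECT version();"
sudo -u postgres psql -c "SELECT '{\"a\": 1}'::jsonb -> 'a';"
```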

Posted on  - By

Utilising a master/slave (hot-standby) setup to provide a resilience layer at database level can be easy. The following assumes you have two PgSQL hosts, a master and a slave, both running Ubuntu 14.04 LTS and PostgreSQL 9.4 (9.4.5).

  • 1. On the master, edit the following in postgresql.conf:

    listen_addresses = '*'
    wal_level = hot_standby
    max_wal_senders = 3

    listen_addresses can also be scoped down to one or more server-bound IP addresses, for added security/best practice.

    wal_level defines what type of data, and how much of it, is written to the Write-Ahead Log. Setting it to hot_standby tells PgSQL to write everything that “archive” mode would, plus the data needed to reconstruct the status of running transactions.

    max_wal_senders defines the maximum number of processes to use to send replication data to the slave. This can be fine-tuned for your DB load and network capacity.

  • 2. On the master, edit the following in pg_hba.conf:

    host	replication		all		trust

    This entry allows the slave to communicate back to the master, but only for replication based tasks.

  • 3. On the slave, edit the following in postgresql.conf:

    hot_standby = on

  • 4. On the slave, create a new configuration file named “recovery.conf” and add the following:

    standby_mode = 'on'
    primary_conninfo = 'host='

  • 5. We now need to sync the DB data from the master to the slave so they can begin at the same point. Your mileage may vary with this, but an rsync command that would work in this scenario is the following. Note the excludes: these are important, don’t sync those:

    rsync -av -e "ssh -p 22" --exclude pg_xlog --exclude postgresql.conf /var/lib/postgresql/9.4/main/* root@

  • 6. Once the sync has completed, start the slave DB; once it’s up, start the master DB. Replication will now be in effect.
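Once both nodes are up, you can verify replication from psql itself. On the master, pg_stat_replication should show one row per connected standby; on the slave, pg_is_in_recovery() should return true. Both are standard PostgreSQL 9.4 views/functions, run here via sudo as the postgres user.

```
# On the master: expect one row for the connected standby
sudo -u postgres psql -c "SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;"

# On the slave: expect 't' while it is acting as a hot standby
sudo -u postgres psql -c "SELECT pg_is_in_recovery();"
```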

Posted on  - By

When it comes to dedicated servers, choosing an Operating System to suit your needs is crucial. Here at VooServers we offer a variety of custom setups, but by far the most common requests at setup time are for the “Famous Five”. That is, Windows Server 2008 (R2), Windows Server 2012 (R2), CentOS (6.x/7.x), Debian and Ubuntu. This quick rundown will be just the resource you need if you’re on the fence about one or the other.


Linux Logo
From the five OSes mentioned, three are Linux based (or at least built on a *nix core). Linux installs are by far the most popular for server deployments and it’s easy to see why: low resource overheads, unparalleled stability and vastly reduced licensing costs (often NONE). For the sake of these overviews, we’ll be looking at the non-GUI, server-core installations.


The “go-to” Linux OS for many. Praised for its simplicity, this Linux OS is a popular choice because it is built around, and entirely based on, RHEL (Red Hat Enterprise Linux); it is almost 100% binary compatible with the RHEL cores. That fact alone opens up a lot of flexibility with packages and software installs, while negating the need for a costly RHN (Red Hat Network) update/support license.

Stability/Server Features: 3 out of 5
Ease of Use: 3 out of 5


Debian Logo
Another very popular OS choice, Debian embodies the epitome of server stability, and has been a prominent server OS for nearly 20 years. This unparalleled stability is traded off against usability, and Debian is often criticised for being slightly too cumbersome. It’s often compared negatively to RHEL, but typically by users who are not fully familiar with Debian’s operations. Another point of note: as of the Debian Squeeze release around 2011, all software packages bundled and installed with the OS are free software; prior to this, certain packages required extra purchases.

Stability/Server Features: 4 out of 5
Ease of Use: 2 and a half out of 5


Ubuntu is the modern spawn of a collaboration between the Debian project and a for-profit organisation named Canonical. As a server OS it is reliable, but the unnecessary packages bundled to aid user experience often become the undoing of this stability. Certain aspects of the OS, such as the installer, how it implements ‘sudo’, and its package manager, mean that Ubuntu is remarkably easy to use – at least compared to its Debian parent. Users of Ubuntu often compliment the level of support given by the technical communities; with it being such an up-and-coming OS, the interest and activity level is high.

Stability/Server Features: 3 out of 5
Ease of Use: 4 out of 5


Windows Logo
The remaining two operating systems are Windows based. In many applications, there’s simply no alternative to having a globally recognisable and usable GUI, product support at the touch of a button and the most widely developed-for software ecosystem in the world. Of course, the trade-off here is cost. Licensing is a serious consideration when planning out your deployment. As much as you’d love the ease of an MS GUI, can your endeavour justify the rather large cost of Windows licensing?

Windows Server 2008 R2

The “go-to” choice of many, and a core of the industry for years, loyalty to 2008 R2 has been hard for Microsoft to shift over onto the 2012 range of operating systems. Built on a Windows 7 kernel and core, its no-nonsense GUI and rock-solid stability are a force to be reckoned with in the server world. The only problem is that, these days, there are some technical limitations you should consider… 2008 R2 caps physical memory at 1TB, and if you’re using it as a virtualisation host, the VHD file format for virtual disks is capped at 2TB. If operating in a cluster, you can only have 16 2008 R2 nodes. If you’re planning a large-scale deployment, or virtualised applications that will use a lot of disk space, these should be taken into account, and traded off against 2008’s massive support base, bug-free nature and no-frills “just works” GUI.

Stability/Server Features: 3 out of 5
Ease of Use: 4 out of 5

Windows Server 2012 R2

2012 R2 is built on a Windows 8 core (or rather an 8.1 core). Released in late 2012, it addresses many of the limitations imposed by 2008 R2: physical memory, for example, is now capped at 4TB; Hyper-V now uses the VHDX file format, increasing the disk limit to a whopping 64TB; and for you clustered-computing guys out there, you can have up to 64 2012 R2 nodes with a max of 8,000 VM’s! The downside, in our opinion, is that 2012 R2 has unfortunately ported across most of the 8.1 GUI: the Metro interface, app screen, and Start button. In a server environment, where precision is key and fluidity of tasks dictates your daily workflow, I can see no reason to have a full-featured Metro interface on a server. Even areas such as Task Manager and Control Panel are greatly cumbersome to use in a rush.

Windows Server 2016 is soon to be released (Technical Preview already under testing). This is built on a Windows 10 Core, and will address the interface issues inherited by 2012.

Stability/Server Features: 4 out of 5
Ease of Use: 3 out of 5

Posted on  - By

Windows 10, the source of much controversy over the last 6 months or so, is finally upon us, and has been for a solid month or two now. Released officially on July 29th 2015, the first few machines of users who opted in to the free upgrade process began to take the plunge. I take a look at 10’s myriad positives and pitfalls, and cast a viewpoint on whether Microsoft are onto a winner or not…

Windows 10 Logo
The Good

Task View.
Yes, the addition of a “Mac-like” Exposé/Mission Control window-peek feature. This one I like a lot: a quick tap of Windows Key + Tab will spring your 10 desktop into life and display each open application in a handy, easy-to-view minified group view. This scales seamlessly across multiple physical monitors too; on my office station I currently have 3 monitors, each heavily populated with application windows. Pro tip: mapping the keystrokes to a spare macro button on your mouse really speeds this up.

The Start Menu.
It’s back! Ok, now hear me out on this one. A lot of people swear by the Metro interface of 8 and 8.1, and were early adopters from the first versions of Windows 8. The claim was that it was much quicker to find certain settings areas or applications using Metro’s search functions. I agree, it may have been quicker to find, but having Metro shut off your view of any open apps and your task bar, on all monitors, whilst it did this was such a massive hindrance to your workflow in a business environment that it killed any hint of productivity you might have had going at the time. And don’t get me started on the location of the shutdown/reset buttons! For me, the return of a semi-traditional Start Menu layout, which doesn’t disrupt your desktop view when you open it, was critical for the success of Windows 10. Kudos to Microsoft on the integration of Metro tiles into an otherwise unused space.

Windows 10 Desktop
Boot-up Time.
Restarts in Windows are sometimes a necessity, whether to apply those pesky updates, or simply because your work machine that’s been up for 162 days is starting to bog down a little bit… Getting back up and into your desktop is better if it happens as quickly as it can. Again taking my fairly solid work machine as a benchmark, I’ve timed this using extremely high-tech scientific instruments (a Samsung Galaxy S5) at a fraction over 9 seconds. This is with an enterprise-level Intel SSD as boot, and only timed to the login prompt (as our domain logon would add precious unfair seconds). So to summarise: speedy, yes, good.

The Bad

Windows Updates.
At the time of writing this piece, I’m going to go ahead and give Microsoft the benefit of the doubt and credit them with the assumption that Windows Update is simply not finished.

Firstly, Microsoft have found the need to ‘hide’ Updates in the most illogical place, and to make matters worse, have left no breadcrumb to where they’ve put it. Naturally, you’d type “Update” into the search box. Nope, nothing. Okay, well it’s in Control Panel usually, so I’ll head there. Nope, nothing. Hmm. Turns out it’s hidden in the “All Settings” section of the notification panel that pops out of the right-hand side of the screen. Why? And furthermore, why didn’t it come up in the search results for “Update”? Poor usability.

Secondly, once you’ve managed to find and launch Windows Update, you’re greeted with a stripped-down Metro-app-style interface. Personal gripes aside, there simply isn’t the level of control in this interface that there needs to be. You have 200 updates to apply to a freshly installed system? Ok, that’s fine, but you can’t de-select a single one of them. You have to install them all, and then go in and uninstall what you didn’t want afterwards from Programs & Features. Not cool.

The final gripe about updates (and yes, I’m aware a lot of this can be adjusted via GPOs etc.): forced reboots at off-peak times, or scheduled reboots within the next 4 days. Nope. No thank you. You do not have permission to reboot my machine at 3.30am, ever. And forcing me to pick a time in the next 4 days ONLY to force a reboot gets you a free ticket on the train to disabling the Windows Update Service.

Windows 10 Updates
My data is mine. Which may seem like a silly statement, but it seems it needs to be reiterated again and again: it’s mine, all of it, and I don’t want any of it being needlessly transmitted back to Redmond HQ. By default, if you don’t delve into the hidden options sections of the 10 install process, you’ll be sharing a lot more than the odd tracking cookie from a dodgy website with our pals over in the marketing team at Microsoft. Speech input, pen input, calendar details, contact information, geographic location and raw URL browser history are all openly shared and transmitted back to Microsoft at the drop of a hat. Along with the staggering misuse of trust that is openly sharing your unique advertising ID with 3rd parties, you’d be excused for thinking someone was pulling your leg. Nope. All of this is enabled by default in the Windows 10 installation procedure. You can disable it, but you’ll need super-sharp eyes to catch the “Customize Settings” link at the bottom of one of the non-descript install screens. The good news is that you can turn everything off within the OS as well, so don’t fret too much if you did miss it. This sort of sharing of information is ok if you want to help Microsoft improve its services and you don’t think the data you’re transmitting is particularly security-critical. For an enterprise user working with customers’ entire company infrastructures daily, however, leaking this sort of data is a crippling security flaw. These sorts of things should be offered as a default-disabled option, not enabled and hidden from the non-tech-savvy users.

The Summary

As a hard-core enterprise user of Windows 7, I was dead against adoption of the previous efforts from Microsoft. 8 and 8.1 fell very short of what they were meant to be; to me it seemed like Microsoft used them simply as an exercise in practising how to get the Metro interface to work in the desktop environment. They were slow, clunky, poorly thought out, and just a downright chore to use on a daily basis. 10 has taken a fresh look at Metro and has condensed its best bits into the smallest-impact footprint it can, in the newly restored 10 Start Menu. Taking myself as a benchmark, I believe this will win over a large number of the hard-core 7 supporters, as it has me. Coupled with the fancy new multiple desktops, Task View, notification panel and many other features, I do truly think that Microsoft have the basis of an OS that will become the new go-to/de facto standard for enterprise desktop installations. That being said, I do think they are still missing a few tricks. Windows Update is simply not in a finished state, and needs a complete overhaul. The mismatch of where some settings applications live, and why they’re not in Control Panel (EVERYTHING should be in Control Panel, no matter where else it is), is a mystery to me, and again smacks of “unfinished”-ness.

As we’re only a few months into 10, I’m willing to give it the benefit of the doubt and state that, YES, in fact Windows 10 could very well be a game changer. Certainly if the game is to win over the old-school 7 users, and tempt across the lazy 8 and 8.1 users. Windows 10 has great promise, Microsoft just need to finish it 😉


© VooServers Ltd 2016 - All Rights Reserved
Company No. 05598156