Dave Byrne, Author at VooServers

#MSIgnite2019 – HyperV Roadmap, Features & Azure




Microsoft Ignite 2019 has seen many new announcements, and the HyperV team were keen not to be left out. A mix of features was included for both Azure HyperV-based VMs and on-prem physical HyperV nodes/clusters.

Next Gen?

The team were very happy to announce that Gen2 HyperV VMs are now available in Azure offerings. A number of advancements in the HyperV ecosystem made Gen2 support a critical requirement: in Azure, VMs on packages such as Mv2 (massive scale), NVv4 (GPU-enabled VMs), HBv2 (compute-optimised VMs) and LSv2 (storage-optimised) all required Gen2 support to progress to where they are today.

Huge-Scale VMs in Azure

Building on Gen2 support, VM packages such as Mv2 can now be deployed with up to 416 vCPU cores and 12TB of vRAM, numbers that are not yet available in other cloud offerings. HBv2 VMs, the compute-optimised ones, can now make use of up to 80k compute nodes in a single instance!


GPU’s for Everyone!

You heard that right: the NVv4 package offers GPU-accelerated environments in Azure, and this technology is also filtering down to on-prem/dedicated HyperV nodes. The HyperV team have worked to bring a new form of GPU partitioning to HyperV, whereby a single GPU can be partitioned and chunks of it presented to individual VMs as a complete physical pass-through GPU. The VM knows no different, and can use the GPU as if it were physical. This is exciting for the accelerated VDI market, and is sure to make waves after the shortcomings of RemoteFX.


Debugging Symbols

A small gesture, but the HyperV team recognise that application debugging around the HyperV stack has often been a tiresome chore. In an effort to make it more palatable, they have released the HyperV stack debugging symbol packages, so analysing core and memory dumps from crashes is now a lot more meaningful.

Live Migration & Windows Admin Center

Windows Admin Center is swiftly becoming the crown jewel in Windows Server 2019+ environments. To that end, the HyperV team have incorporated Live Migration control and feedback between HyperV nodes in a cluster right into Admin Center. This is available today, and on their roadmap the plan is to eventually make Admin Center the one-stop shop for HyperV sysadmin operations.


VM and CPU Grouping

Available now in HyperV is the ability to group CPUs in multi-socketed physical environments. This presents a lot of options around segmenting high-risk or public-facing VMs to one CPU set, and high-data-value VMs to another, in an effort to further mitigate the impact of CPU microcode attacks such as Spectre and Meltdown. CPU groups have no knowledge of the other CPU groups, and communication between groups is prohibited.

All in all, exciting times to be involved in HyperV rollouts, both in Azure and on-prem/dedicated.


Teams vs SfB for Enterprise Voice?

It’s no secret that Microsoft are heavily pushing Teams in the enterprise collaborative workspace, pulling in Teams apps for MS Project, Planner, syncing with the 365 dataset etc. It’s also fairly well known that Teams does support a form of enterprise voice, more specifically “Cloud Voice” in Office 365.

Another point of contention is that the meetings experience in Skype for Business rarely works very well: if there aren’t issues hosting a meeting, there are issues with external contacts joining one. Microsoft have touched on this at Ignite, and the weaknesses in SfB meetings are being addressed via Teams adoption.


Whilst this may be functional and appear attractive to new customers with no existing enterprise voice solution, it represents quite an obstacle for organisations that already have an established Skype for Business (or Lync Server 2013) enterprise voice solution. Ignite 2019 has seen Microsoft flaunt their new “Meetings First” approach: full Teams adoption isn’t feasible for a lot of organisations, as there is currently no logical migration path for SfB enterprise voice topologies into Teams. Whilst Microsoft works on addressing this, they are encouraging most people to think “Meetings First”.

Meetings First?

So what does Meetings First mean? In essence, it’s a staggered approach to Teams adoption. Microsoft have developed a deployment mode for Teams in which you can opt for “selected capabilities” for the Teams deployment. IT administrators can now choose for Teams to adopt only the collaborative and meetings features (channel chat, integration into other hosted applications such as BI or Projects, and meetings) while leaving IM chat and calling/enterprise voice to Skype for Business.
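For illustration, the “selected capabilities” deployment mode corresponds to a Teams upgrade (coexistence) policy, assignable via the Skype for Business Online PowerShell module. A hedged sketch only; the identity is a placeholder and your rollout will differ:

```powershell
# Assign the "SfB with Teams collaboration and meetings" coexistence
# mode, i.e. Meetings First, tenant-wide (sketch only).
Grant-CsTeamsUpgradePolicy -PolicyName SfBWithTeamsCollabAndMeetings -Global

# Or pilot it for a single user first (placeholder identity).
Grant-CsTeamsUpgradePolicy -PolicyName SfBWithTeamsCollabAndMeetings -Identity "user@contoso.example"
```

Chat and calling stay anchored in Skype for Business until the policy is later moved to TeamsOnly.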



The 2nd full day at Microsoft Ignite saw a host of announcements and information surrounding Exchange Online and Microsoft’s new “modern” Exchange Admin Experience.

New EXO Exchange Online Cmdlets

Coming to GA soon is a collection of newly developed Exchange Online PowerShell cmdlets (EXO), built from the ground up to exceed previous cmdlet performance several times over and streamline the work of Exchange admin staff. The new cmdlets see a 4-8x increase in execution efficiency; live demos at the Orange County Convention Center saw the cmdlets process 10k Get-Mailbox requests in under 1 minute.
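As a rough sketch of what this looks like in practice (the connection details are placeholders; cmdlet names per the EXO module announcement):

```powershell
# Connect, then use the REST-backed EXO cmdlet in place of Get-Mailbox.
Connect-ExchangeOnline -UserPrincipalName "admin@contoso.example"
Get-EXOMailbox -ResultSize Unlimited | Select-Object DisplayName, PrimarySmtpAddress
```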


EAC Design & Efficiency Overhaul

The current EAC layout is both praised and loathed in equal measure, but today Microsoft announced a new version, poised to address some of the long-standing gripes with the current EAC:

– Actions for managing, creating and modifying user and shared mailboxes have been merged into a single easy-to-use action panel.
– Mailbox properties are now even easier to view in an instant: simply clicking on any user, equipment or room mailbox will slide out its properties pane in the same view.
– The ability to bulk-update mailboxes of any type has been improved and made faster.
– All mailbox lists can now be filtered on the fly, without reloading pages, adding to the user experience and creating more efficient workflows.


G-Suite Migration Wizards Come To EAC

A key addition to the new modern EAC experience is the ability to connect to a G-Suite tenant and initiate an inbound mailbox migration into your Exchange tenant. Not only does this seamlessly migrate and import the remote mailbox, it also caters effortlessly for importing the remote G-Suite account’s contacts and calendar items in the onboarding operation.

In previous versions, this utilised the Google IMAP connector, and was subject to a 2GB-per-day rate limit when onboarding, something that was a killer when bringing larger G-Suite clients into your Exchange tenant. In the new modern Exchange admin experience, the Exchange migration wizard makes use of the G-Suite REST API for its mailbox and user data calls, thus bypassing the 2GB daily limit. This breathes new life into G-Suite client onboarding opportunities.

Another massive improvement is the departure from the dated use of username/password to migrate G-Suite accounts into Exchange. Now, we make use of the G-Suite API to generate a pre-authorised JSON token file, which is securely uploaded to the Exchange migration wizard for authentication into G-Suite.

To top it all off? All of this is bundled into a seamless GUI wizard with minimal user input: a few clicks of Next, and a super-fast inbound G-Suite migration is possible.


Server 2019, initially released in late 2018, has seen several patches and version updates. It is a key offering in Microsoft’s OS lineup, with strong hyperconvergence and software-defined storage features. But this post focuses on a few tools within Server 2019 that stole the limelight today at Microsoft Ignite 2019.

Windows Admin Center (version 1910) was announced today (Monday 4th November) at Microsoft Ignite, and brings with it a host of new features for the IT pro to make use of. Being deployable to any Windows Server 2019 or Windows 10 desktop installation makes it both versatile and accessible to support teams and smaller organisations. Not only that, it can seamlessly connect to, monitor and manage all servers, virtual or physical, from Server 2008 R2 upwards.


Performance Monitor

A key new feature of Windows Admin Center is a drastic overhaul of something that was for a long time a clunky pain for IT support staff: Performance Monitor! Once a slow and cumbersome tool that took far too long to unearth the performance counter information you needed, it is now a sleek feature integrated directly into Windows Admin Center. Adopting a very Azure-esque aesthetic, the new version of Performance Monitor (or perfmon) breathes new life into a staple IT problem-resolution tool.

Firstly, individual performance counters can now be searched for easily, rather than having to know the name of a counter, or manually locate it from a dauntingly large list. Once the counter is selected, Admin Center immediately begins graphing the metric in real time, allowing for another counter selection. Another plus point here is that all graphs generated by PerfMon in Admin Center are interactive, scalable, resizable and customizable.

Another cool new addition is that once a single performance counter has been chosen, Windows Admin Center will automatically filter the list of available counters, suggesting related or relevant counters that match the chosen dataset. This helps to empower support teams and provide access to data relevant to the scenario.

Performance Monitor in Windows Admin Center v1910 also has the ability to save a set layout of graphs, charts and counters as a Workspace. Share that workspace with colleagues, or set it as the instance default to provide instant overviews or in-depth analysis across all Admin Center tenant members.


Azure Hybrid & Hyper Converged Management

This is all great for on-prem servers, but the usefulness extends even further, to hyperconverged clusters, legacy Hyper-V failover clusters and, most interestingly, Azure hybrid environments. VMs and other compute resources running in Azure, as part of an Azure hybrid deployment, can be added into an on-prem (or Azure-hosted) Windows Admin Center instance and managed in just the same way.

Windows Admin Center v1910 brings with it powerful new technology to natively deploy a HyperConverged cluster on applicable hardware. It even offers seamless set up, in wizard form, for advanced Storage Spaces configurations. Deploying modern failover clusters has never been easier with Microsoft Windows Admin Center v1910.

Another very strong and related offering available here is Azure Arc. With an Arc deployment, you can now seamlessly apply Azure based management and control policies to assets not naturally contained within Azure. Link an on-prem hyper converged cluster to Azure Arc to benefit from things such as access control through RBAC, domain type deployment policies and more. More content on Azure Arc will follow later this week.


There may be times when you wish to give VMs on one of your SolusVM nodes access to IP resources that are segmented into discrete VLANs at network level. If so, you need to create network bridge interfaces on the node and attach VLAN interfaces to them. This guide shows how I accomplished this.

  1. Configure the physical interface that is supplying the node with the VLAN-tagged traffic. In this example, we have trunked eno2 with VLANs 220 and 221, as we have a group of VMs that need to be able to bind IPs within these VLANs.

    [root@solus-node01]# cat ifcfg-eno2
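The file contents were originally shown as a screenshot; a minimal sketch of what ifcfg-eno2 might contain (values are illustrative, and no IP is bound to the trunk itself):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eno2
# Physical trunk port carrying tagged VLANs 220 and 221
DEVICE=eno2
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
```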

  2. Configure your VLAN alias interfaces. Note that we designate each interface to its own new bridge interface; this is a required step.

    [root@solus-node01]# cat ifcfg-eno2.220
    [root@solus-node01]# cat ifcfg-eno2.221
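Again, the originals were screenshots; sketches of the two VLAN alias files (note each points at its own bridge, matching the brctl output later in this guide):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eno2.220
DEVICE=eno2.220
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br2

# /etc/sysconfig/network-scripts/ifcfg-eno2.221
DEVICE=eno2.221
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
BRIDGE=br1
```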

  3. Configure your bridge interfaces.

    [root@solus-node01]# cat ifcfg-br2
    [root@solus-node01]# cat ifcfg-br1
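Sketches of the two bridge files themselves (again illustrative, with the bridge holding no IP of its own in this example):

```shell
# /etc/sysconfig/network-scripts/ifcfg-br2
DEVICE=br2
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
DELAY=0

# /etc/sysconfig/network-scripts/ifcfg-br1
DEVICE=br1
TYPE=Bridge
BOOTPROTO=none
ONBOOT=yes
DELAY=0
```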

    At this point, if you want the host node to also have an IP within these VLANs, you would bind it to the bridge interface directly; you can use the usual IPADDR, PREFIX, GATEWAY etc. to achieve this.

  4. Bring all new interfaces up.

    [root@solus-node01]# ifup eno2.220
    [root@solus-node01]# ifup eno2.221
    [root@solus-node01]# ifup br2
    [root@solus-node01]# ifup br1

  5. Check the state of your bridges.

    [root@solus-node01]# brctl show
    <some info redacted>
    br1          8000.0cc47xxxxxxx       no          eno2.221
    br2          8000.0cc47xxxxxxx       no          eno2.220

    Note that you should see your two new bridges with the relevant VLAN alias interface attached to each. You will also have at least one other bridge (br0); however, this has been removed from the output above to simplify things.

    Now that you have bridges available, you can begin assigning them to the VMs that need access. In my case, I had to use KVM Custom Config in SolusVM to be able to a) specify the right bridge and b) create a second interface inside the VM.

  6. Custom config for a sample VM.

    <domain type='kvm'>
      <os>
        <type machine='pc'>hvm</type>
        <boot dev='hd'/>
        <boot dev='cdrom'/>
      </os>
      <clock sync='localtime'/>
      <devices>
        <graphics type='vnc' port='xxxx' passwd='xxxxxxxx' listen=''/>
        <disk type='file' device='disk'>
          <source file='/dev/vg_xxxxxxxx/kvmXXX_img'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <disk type='file' device='cdrom'>
          <target dev='hdc'/>
        </disk>
        <interface type='bridge'>
          <source bridge='br1'/>
          <target dev='kvmXXX.0'/>
          <mac address='00:16:3c:xx:xx:xx'/>
        </interface>
        <interface type='bridge'>
          <source bridge='br2'/>
          <target dev='kvmXXX.1'/>
        </interface>
        <input type='tablet'/>
        <input type='mouse'/>
      </devices>
    </domain>

    Note that this is heavily edited; the main focus is the duplicate “interface” section, and that the duplicate has no MAC address specified (important). You can also see that br1 and br2 have been specified. Make a mental note of which one is which, so that inside the VM you can assign IPs in the relevant VLAN.

    Save the custom config and reboot the VM. Assign IPs manually once booted into the VM.

  7. Checking your bridge status now should show the VM interface active within it.

    [root@solus-node01]# brctl show
    <some info redacted>
    br1         8000.0cc47axxxxxx       no          eno2.221
    br2         8000.0cc47xxxxxxx       no          eno2.220

You can see more of our tutorials written by our own Technical engineers here.


What is it?

The security-conscious among you will be well versed in the technicalities of Intel microcode exploits such as Spectre and Meltdown, affecting Intel Core, Celeron, Pentium, Xeon and even Atom CPUs (along with a whole host of AMD-based chips). Ever keen to keep Intel’s security team on their feet, researchers from Belgium, Israel, the USA and Australia have discovered an exploit within Intel’s SGX instruction set. On 14th August 2018, Intel released information regarding this new variant of side-channel cached-data exploit, known as “Foreshadow”: a Layer 1 data cache exploit with the ability to render guest VM data readable to other guests on a virtualised platform that makes use of SGX extensions on an Intel CPU.

There’s a difference this time, however: L1TF Foreshadow (referred to as L1TF from here on out) only affects Intel CPUs using SGX, and SGX (Software Guard Extensions) is an instruction set only present on Intel’s “Core” line-up of CPUs. So that’s the old trusty Core and Core 2 ranges, along with the newer Core i3, i5, i7 and even i9 chips.

Does it affect you?

VooServers enterprise-level infrastructure clients, and those within our hosted virtual environments, will be pleased to know that we do not make use of any “Core” chips from Intel. Our core service backbone and our bespoke enterprise scenarios are composed solely of Xeon CPUs. As such, there is no scope whatsoever for data breaches utilising this exploit for customers within VooServers managed infrastructure.

(There may be a negligible quantity of unmanaged, custom dedicated server customers with aging, legacy hardware that could be affected, however these are not virtualisation environments and hence should pose no risk to customer data. If you feel you are affected by this, please reach out to our support team at support@vooservers.com)



Overview & Version Information:

I will be showing how to install and configure Oracle Fusion Middleware GoldenGate 12.3 to replicate a full Oracle SCHEMA from a 12c instance on Oracle Linux 7 into an MSSQL Server 2014 Std instance on Windows Server 2016.

  • Oracle Golden Gate v12. (For Oracle Linux 7)
  • Oracle Golden Gate v12. (For Windows Server 2016)
  • Oracle Linux 7.4 (Kernel 4.1.12-112.14.2.el7uek.x86_64)
  • Oracle 12c (v12.
  • Windows Server 2016 (x64 Datacentre)
  • Microsoft SQL Server 2014 (Standard v12.0.5207.0)

We will be making use of EXTRACT and REPLICAT processes for the initial data load, and also utilising TRAILs, CDC and CDD to handle the live change data replication.

Throughout this article, Oracle Golden Gate will be referred to as OGG.

Installing OGG into Oracle Linux 7 (12c DB):

Head to https://edelivery.oracle.com and download the relevant OGG 12.3 DLP, which at the time of writing is “V975837-01.zip”. Transfer this zip file to a convenient location on your OL7 server.

<<< On the SOURCE SERVER >>>

On OL7, create the staging directory, and prepare by installing readline wrapper:

[root@shell]# mkdir /stage
[root@shell]# mv /path/to/zipfile.zip /stage/
[root@shell]# yum -y install readline readline-devel
[root@shell]# cd /stage
[root@shell]# wget ftp://ftp.pbone.net/mirror/download.fedora.redhat.com/pub/fedora/epel/7/x86_64/Packages/r/rlwrap-0.42-1.el7.x86_64.rpm
[root@shell]# unzip V975837-01.zip
[root@shell]# yum install rlwrap-0.42-1.el7.x86_64.rpm

Setup aliases in OL7 for GGSCI and SQLPLUS:

[root@shell]# su -l oracle
[oracle@shell]# nano ~/.bashrc

# Aliases for GoldenGate
alias sqlplus="rlwrap sqlplus"
alias ggsci="rlwrap ./ggsci"

[oracle@shell]# . .bashrc && alias
[oracle@shell]# mkdir /u01/app/oracle/product/ogg_src

NOTE: You may change the directory name created above, it must be within your oracle installations product directory, but you may name it whatever you wish. On later installations, I suffixed the directory with the version number (ogg_src_12-3).

Run the OGG installer:

Connect to the console of the server, VM Console if virtualised, or physical KVM console if using a dedicated system. You need to run the next steps in a graphical environment. This guide assumes you have a functioning X server or other compatible desktop environment to use.

Log on as your Oracle user, open a Terminal window:

[oracle@shell]# cd /stage/fbo_ggs_Linux_x64_shiphome/Disk1
[oracle@shell]# ./runInstaller

The graphical OGG installer will now start. Follow the on screen instructions.

Select 12c when prompted.

Your details here may differ to the screenshot shown.

Software Location: The full working path to the ogg product folder that you created earlier
Start Manager: Checked (starts manager as automatic Linux server)
Database Location: The oracle DB Home location of your instance
Manager Port: I’ve used a slightly different port, you are welcome to use whatever you wish, but be sure to substitute it in later steps of the install.

Let the installer complete.

Done, installation is complete. We will now work on installing OGG into Windows Server 2016.

Installing OGG into Microsoft Windows Server 2016 Datacentre:

<<< On the TARGET SERVER >>>

Head over to https://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html and download the relevant version of OGG for Windows Server MSSQL. At the time of writing it should be “Oracle GoldenGate for SQL Server (CDC Capture) on Windows (64bit)” – 75MB. Transfer the downloaded zip to your MSSQL server.

Create a new directory, for this example, we are using “C:/GoldenGate”, copy the contents of the extracted ZIP into the new directory.

Open an Administrator level, elevated command prompt, and change directory to the GoldenGate directory you created.

Run GGSCI and create the OGG subdirectories:

C:/Users/oggdba> cd C:/GoldenGate
C:/GoldenGate> ggsci.exe
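The GGSCI session itself was shown as a screenshot; the subdirectories are created with CREATE SUBDIRS, along these lines:

```
GGSCI> CREATE SUBDIRS
GGSCI> EXIT
```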


Give the MGR process a custom name:
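The custom service name lives in the GLOBALS parameter file; a sketch, where the service name OGGMGR1 is a placeholder of my choosing:

```
GGSCI> EDIT PARAMS ./GLOBALS

-- GLOBALS
MGRSERVNAME OGGMGR1
```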




Install the OGG Manager as a service, with some options:

C:/GoldenGate> install.exe ADDEVENTS
C:/GoldenGate> install.exe ADDSERVICE
C:/GoldenGate> install.exe AUTOSTART

Restart your windows system and verify the OGG MGR starts on boot, verify this with:


Create MSSQL Target Database, Schema, User and DSN:

This section will outline the basics of setting up the OGG Target DB and DSN, although this should be taken with some interpretation, use your own settings, permissions, naming schemes etc. as appropriate.

<<< On the TARGET SERVER >>>

Open SQL Server Management Studio, and create a new database to be used for storing your OGG replicated data set:

Create the new DB.

Name it something sensible.

In my experience, you MUST change the Collation (default character set) to “Latin1_General_BIN2”. Without this set, I usually run into issues trying to replicate certain Unicode characters in fields in the source DB.

Create SCHEMA within new DB:

Right click on your new DB, and select “New Query”, type:


NOTE: “SCHEMA1” must be the name of your source SCHEMA that you are replicating.

Create the new User, and give SCHEMA ownership to user:

Right click “Security” in the SQL Instance branch (not within the Database), and select New Login.

Ensure SQL Server Authentication is used, and set a secure password. Select your recently created DB as the users default DB, and choose “British English” as the users default language.

Within “User Mapping”, check the DB you just created, and ensure “db_owner” is selected. Take this opportunity to set the default SCHEMA to the SCHEMA you created earlier.

Create System DSN for use by OGG:

Open Control Panel, Administrative Tools, and open “ODBC Data Sources (64bit)”. Change tab to “System DSN” and click the ADD button.

Select “ODBC Driver 11 for SQL Server”, and name your DSN something logical and simple; in this example, “oggrepldsn”. Select the local SQL Server instance from the drop-down, and ensure you select SQL Server Authentication. Check the box to connect to SQL to obtain additional settings, using the user you created earlier.

On the next screen, change the default DB to the DB created earlier. Leave everything else untouched. And finish the DSN Wizard.

Configuring GGSCI and Preparing for Initial Data Load

<<< On the SOURCE SERVER >>>

Verify the manager is running OK:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ggsci


[Here you may add any additional manager options you want, by default, you only need the PORT parameter]


[Verify the manager is running, you may also use START or STOP MGR]
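The screenshots here showed the manager parameter file and status check; as a sketch, with the port matching the MGRPORT used later in this guide:

```
GGSCI> EDIT PARAMS MGR

-- Manager parameter file
PORT 7890

GGSCI> INFO MGR
```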

Create Schema TRANDATA

GGSCI> DBLOGIN USERID <schema-user-here>
Password: <user-pass-here>
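The command shown here adds supplemental logging for the tables to be replicated; a sketch covering a whole schema:

```
GGSCI> ADD TRANDATA SCHEMA1.*
```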

Substitute “SCHEMA1” for your schema you wish to replicate.

NOTE: Use of “ADD TRANDATA” only adds TRANDATA for the tables specified by your selection after it. If you add new tables after this has been generated, the new tables will have no TRANDATA, and therefore cannot be replicated until TRANDATA has been added. This is fine for this example; however, a more robust solution would be to use ADD SCHEMATRANDATA, which adds TRANDATA at schema level rather than table level, so new tables within the schema are automatically included.

Verify that the TRANDATA is added OK:


Create source table definition parameters:


DEFSFILE /u01/app/oracle/product/ogg_src/dirdef/<filename-here>.def, PURGE 
USERID <oracle-user> PASSWORD <oracle-user-password>

Substitute a relevant .def file name into the DEFSFILE parameter; you’ll need to use this later.

NOTE: In my example, I exclude some tables that I know I am not going to need in my replication. You may or may not want to do this. Be aware that you cannot generate definitions for externally organized tables (if you’re using them).
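Pulling the fragment above together, a complete dirprm/defgen.prm might read as follows (the file name and the excluded table are illustrative placeholders):

```
-- dirprm/defgen.prm
DEFSFILE /u01/app/oracle/product/ogg_src/dirdef/source_defs.def, PURGE
USERID <oracle-user> PASSWORD <oracle-user-password>
TABLEEXCLUDE SCHEMA1.AUDIT_LOG;
TABLE SCHEMA1.*;
```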

Generate the source table definitions using DEFGEN:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ./defgen paramfile dirprm/defgen.prm

This creates the .def file within ./dirdef/

The generated *.def file now needs to be transferred to the TARGET SERVER, and placed within $INSTALL_DIR/dirdef/

Configure Initial Data Load EXTRACT

These steps configure the initial load groups that will copy source data and apply it to the target tables.

<<< On the SOURCE SERVER >>>

Add the initial data load EXTRACT batch task group:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ggsci


NOTE: EINI9001 is created from the following format EINI<unique ID, max 4 digits>

Verify the EXTRACT created with the following:
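The two GGSCI commands shown here in screenshots were along these lines (SOURCEISTABLE marks this as an initial-load batch task):

```
GGSCI> ADD EXTRACT EINI9001, SOURCEISTABLE

GGSCI> INFO EXTRACT *, TASKS
```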


Configure the initial data load EXTRACT PARAM file:


-- GoldenGate Initial Data Capture
USERID <oracle schema user here>, PASSWORD <oracle schema password here>
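A fuller sketch of the whole initial-load parameter file (host and credentials are placeholders; RINI9001 is the REPLICAT group added on the target):

```
-- dirprm/eini9001.prm
EXTRACT EINI9001
-- GoldenGate Initial Data Capture
USERID <oracle schema user here>, PASSWORD <oracle schema password here>
RMTHOST <target-server-IP-address>, MGRPORT 7890
RMTTASK REPLICAT, GROUP RINI9001
TABLE SCHEMA1.*;
```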

<<< On the TARGET SERVER >>>

Add the initial data load REPLICAT batch task group:


-- GoldenGate Initial Data Load Delivery 
TARGETDB oggrepldsn, USERID oggrepluser, PASSWORD <SQL user password here>
DISCARDFILE ./dirrpt/RINI9001.txt, PURGE 
SOURCEDEFS ./dirdef/<definition-file-name-from-earlier>.def OVERRIDE
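For completeness: the REPLICAT group itself is added as a SPECIALRUN batch task, and the parameter file above also carries a MAP statement; sketched as:

```
GGSCI> ADD REPLICAT RINI9001, SPECIALRUN

-- appended to the RINI9001 parameter file
MAP SCHEMA1.*, TARGET SCHEMA1.*;
```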

INTERLUDE – Getting to this point in the guide assumes you have created the relevant tables/DDL in your target MSSQL database. OGG EXTRACT and REPLICAT processes will not create tables for you within MSSQL; they expect them to be there to insert into on REPLICAT. There is no agreed method of how best to do this. Personally, I export DDL from SQL Developer and then spend a lot of time pruning that output down to JUST the CREATE TABLE and KEY statements. Of course, you’re then left with a lot of DDL statements that are only valid within Oracle, and you’ll need to convert them into SQL that MSSQL understands. There are many ways to do this: there are premium, paid-for third-party tools, and there are also free online tools such as SQLines. You could also do it manually if you didn’t have many tables, although I wouldn’t recommend that.

<<< On the SOURCE SERVER >>>

Start the initial data load EXTRACT process:


View its progress with:


NOTE: There may be many errors to resolve on your first EXTRACT run: table names not existing, data type mismatches, column names not existing, permissions, network-level restrictions such as firewalls, etc.

Assuming the EXTRACT runs, REPLICAT will start on the TARGET SERVER, verify this, and its results, with the following on the TARGET SERVER:


If you have made it this far, you now have a DB in MSSQL with your Oracle data set in it, congrats! If that’s all you wanted, you can stop here, but most of the time, you will be aiming for live change data replication from Oracle. For this, we need to make use of a few more components of OGG.

Specifically, CDC and CDD. Change Data capture (via EXTRACT on SOURCE), and Change Data Delivery (via REPLICAT on TARGET). The next section explains how to do this.

Configuring Change Data Capture via EXTRACT

Through the use of trail files being shipped from SOURCE to TARGET, OGG can replicate changes in data detected at source (and written to the trail files). Here’s how to do that.

<<< On the SOURCE SERVER >>>

Add the EXTRACT group for CDC:


NOTE: “THREADS” is an integer specifying how many EXTRACT threads are maintained to read the different redo logs on the different Oracle instance nodes. If you are not running an Oracle Cluster, or RAC, then set this to 1; setting a higher value does not improve single-instance performance.
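The ADD command shown here was along these lines (EORA9001 is a placeholder group name, consistent with the EORA prefix mentioned at the end of this post):

```
GGSCI> ADD EXTRACT EORA9001, TRANLOG, THREADS 1, BEGIN NOW
```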

Verify it created OK with:


Configure the EXTRACT group for CDC:


-- Change Capture parameter file to capture
USERID <oracle-user-name>, PASSWORD <oracle-user-password>
RMTHOST <target-server-IP-address>, MGRPORT 7890
RMTTRAIL ./dirdat/1p

NOTE: The 2 character (max) identifier at the end of RMTTRAIL is important, make it unique, and remember it for later.

Create the GoldenGate Trail:


Verify that it created OK:


And verify the results:


Configuring Change Data Delivery via REPLICAT

The trail files defined earlier will now be present on the TARGET server, and they can be used by a CDD REPLICAT process to replicate changed data into the TARGET in real time.

<<< On the TARGET SERVER >>>


Edit Global PARAMs and create the checkpoint table:

Create REPLICAT checkpoint group:


NOTE: The two letter prefix for EXTTRAIL is the same as earlier.
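Sketched (group and checkpoint table names are placeholders; the checkpoint table is typically named in GLOBALS and created via ADD CHECKPOINTTABLE after a DBLOGIN to the target DSN):

```
GGSCI> DBLOGIN SOURCEDB oggrepldsn, USERID oggrepluser
GGSCI> ADD CHECKPOINTTABLE dbo.oggchkpt
GGSCI> ADD REPLICAT RMSS9001, EXTTRAIL ./dirdat/1p
```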

Configure REPLICAT PARAM file for CDD:


TARGETDB oggrepldsn, USERID oggrepluser, PASSWORD <sql-user-password>
SOURCEDEFS ./dirdef/1pmoracle.def

Start the REPLICAT process:


Verify it is running with:
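The start and verification commands shown in the screenshots were along these lines (the group name is the same placeholder as above):

```
GGSCI> START REPLICAT RMSS9001
GGSCI> INFO REPLICAT RMSS9001
GGSCI> STATS REPLICAT RMSS9001
```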



Providing everything is running without issue, you are now finished, and you have a live replication scenario shipping data from Oracle 12c on Oracle Linux 7 into MSSQL 2014 on Windows Server 2016. This will continue to run for as long as you have the EORA and RMSS processes running. The initial data load EXTRACT and REPLICAT groups (EINI and RINI) are now redundant, unless you ever want to drop your whole data set from MSSQL and have it replicated from scratch again.

Some of the above processes may seem simple; however, documentation on a lot of this is few and far between, and when it can be found in the Oracle documentation, it is not often easy to interpret. In our testing, I was able to see change data appear in TARGET around 1 second after committing in SOURCE.

Please feel free to reach out to me with any questions you may have. I can’t promise I can answer them all, but I will do my best to assist if I can.


‘Dirty COW’ may sound humorous and far removed from the world of IT systems security, but the truth couldn’t be more different. Gaining its name from a play on the acronym for the Linux kernel mechanism ‘Copy On Write’, Dirty COW is the latest in a seemingly never-ending timeline of Linux kernel exploits.

The theory is relatively simple: a malicious application sets up a race condition in order to modify a root-owned file (executable or otherwise) while it is mapped into the personal memory space of a non-privileged user. These changes are then committed to storage by the kernel. Not ideal. TheRegister.co.uk explained the process perfectly:

The exploit works by racing Linux’s CoW mechanism. First, you have to open a root-owned executable as read-only and mmap() it to memory as a private mapping. The executable is now mapped into your process space. The executable has to be readable by the process’s user to do this.

Meanwhile, you repeatedly call madvise() on that mapping with MADV_DONTNEED set, which tells the kernel you don’t actually intend to use the memory.

Then in another thread within the same process, you open /proc/self/mem with read-write access. This is a special file that allows a process to access its own virtual memory as if it was a file. Using normal seek and write operations, you then repeatedly overwrite part of your own memory that’s mapped to the root-owned executable. The overwrite shouldn’t affect the executable on disk.

So now, your process has the read-only binary mapped in as a private read-only object, one thread is spamming madvise() on that read-only object, and another thread is writing to that read-only object. Writing to that memory object should trigger a CoW: the touched page of the executable will be altered only in the process’s memory – not the actual underlying root-owned file that’s mapped in.

However, due to the aforementioned bug, the kernel performs the CoW operation but then allows the process to write to the read-only mapped executable anyway. These changes are committed to disk by the kernel, which is bad news.

Whilst this exploit technically isn’t new (the bug has been present in kernel versions dating back to 2007), its priority and significance have rocketed due to public acknowledgement in major bug trackers. Fully working code releases that make (malicious) use of this exploit are now circulating in infosec communities, ripe for misuse. Thankfully, most major distributions have already released patches to resolve the bug.

RedHat – https://access.redhat.com/security/cve/cve-2016-5195
Debian – https://security-tracker.debian.org/tracker/CVE-2016-5195
Ubuntu – http://people.canonical.com/~ubuntu-security/cve/2016/CVE-2016-5195.html

Linux Kernel creator and (still) key developer, Linus Torvalds, summarised the fix in his own release last week:

This is an ancient bug that was actually attempted to be fixed once (badly) by me eleven years ago in commit 4ceb5db9757a (“Fix get_user_pages() race for write access”) but that was then undone due to problems on s390 by commit f33ea7f404e5 (“fix get_user_pages bug”). In the meantime, the s390 situation has long been fixed, and we can now fix it by checking the pte_dirty() bit properly (and do it better).
Read the full release here


In this guide, I show you how to install Postfix and PostFWD (Postfix Firewall Daemon), configure rate limiting for a specific recipient domain, and integrate PostFWD into Postfix.


PostFWD v1.0+ (we will install v1.35)
Postfix v2.5+ (we will install v2.6.6)
CentOS 6.x (we are working on 6.8 x64)
You may also need tools such as nc (netcat), telnet, and various Perl modules (detailed later)

Install Postfix

Postfix is a strong, reliable and extremely common SMTP server. CentOS 6 comes preinstalled with Postfix, but to use PostFWD you need to ensure you are running version 2.5 or higher.

Find out using ‘rpm’:

[root@server]# rpm -qa | grep postfix

Or use ‘yum’:
[root@server]# yum info postfix

Once installed, if for some reason you were using sendmail as your default MTA (Mail Transfer Agent), you’ll need to change this to postfix using ‘alternatives’:
[root@server]# alternatives --set mta /usr/sbin/postfix

Check you are running a valid version of Postfix:
[root@server]# postconf mail_version
mail_version = 2.6.6

Ensure Postfix starts on a system reboot:
[root@server]# chkconfig postfix on

Configure Postfix

Configuring Postfix is a rather open-ended task, and will depend on what you are using the SMTP server for. If you have come this far, you likely already have a Postfix configuration, or you are simply using it to relay mail for a specific application. Either way, you should look to set some of the most basic Postfix configuration options in ‘/etc/postfix/main.cf’:

myhostname = Set to the mail server's FQDN/hostname
mydomain = The domain name of the mail server
myorigin = Usually the same as $mydomain
inet_interfaces = Set to all to listen on all network interfaces
mydestination = $myhostname, localhost, $mydomain
mynetworks = 127.0.0.1/32
relay_domains = $mydestination
home_mailbox = Maildir/

If you are relaying from a specific location/server, you will of course need to adjust how you do this. This How-To is not a Postfix/SMTP Server configuration guide. It is a PostFWD integration guide to Postfix.
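As a worked example, a minimal main.cf for the kind of setup this guide assumes might look like the following (the hostnames and networks are illustrative only; substitute your own):

```
myhostname = mailtest1.vooservers.com
mydomain = vooservers.com
myorigin = $mydomain
inet_interfaces = all
mydestination = $myhostname, localhost, $mydomain
mynetworks = 127.0.0.1/32
relay_domains = $mydestination
home_mailbox = Maildir/
```

You can review the result with ‘postconf -n’, which prints only the settings that differ from Postfix’s defaults.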

Install PostFWD

PostFWD, or Postfix Firewall Daemon, is a daemonized process that acts as a check policy service for Postfix. It has a customisable ruleset that it applies dynamically to any and all mail that Postfix sees; we’ll touch more on that later. It’s very powerful, and offers several mail-handling features that would otherwise not be possible in Postfix alone (or any other MTA, for that matter).

We need version 1.0 or higher, so grab the tarball from postfwd.org, and run through some initial setup steps:
[root@server]# cd /usr/local
[root@server]# wget http://postfwd.org/postfwd-1.35.tar.gz 
[root@server]# tar -xvzf postfwd-1.35.tar.gz
[root@server]# mv postfwd-1.35 postfwd
[root@server]# cp /usr/local/postfwd/etc/postfwd.cf /etc/postfix/
[root@server]# cp /usr/local/postfwd/bin/postfwd-script.sh /etc/init.d/postfwd
[root@server]# chkconfig postfwd on
[root@server]# service postfwd start

Woah there, it’s not that easy. As the PostFWD documentation states quite adamantly, this will not work (or even start) without a couple of Perl modules installed.

[root@server]# yum -y install perl perl-CPAN perl-prefork gcc

You’ll need to do the rest in ‘cpan’:
[root@server]# cpan
cpan[1]> install Net::Server::Daemonize
cpan[2]> install Net::Server::Multiplex
cpan[3]> install Net::DNS

Once all of the Perl modules (and Perl itself) are installed, it’d be a good idea to issue a yum update and reboot the system. Now you are ready to continue and configure PostFWD.

In terms of configuration, the world is your oyster with PostFWD. As the name suggests, it is essentially a firewall for your mail server: it can allow, drop, defer, or silently reject mail, rate limit it, and match rules by message character counts, body sizes, send frequency, or a combination of any number of these factors. Want to stop users x, y and z from sending more than 200MB’s worth of attachments in a 12-hour period? No problem.

In this specific example, we want to rate limit (rather aggressively) all outbound mail to a specific domain: we don’t want to send any more than 10 emails every 30 minutes. Mails sent after this limit is reached will be rejected permanently, while mails within that limit can be sent at any frequency. (This is unlike the stock rate limiting within Postfix itself, where a 10-emails-in-30-minutes limit would delay ALL mail, sending 1 mail every 3 minutes until everything had eventually gone out. In this scenario, that is not helpful.)
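To make those semantics concrete, here is a tiny, hypothetical sketch of that counting logic in plain shell. This is NOT PostFWD’s code, just an illustration of fixed-window accounting: allow LIMIT messages per WINDOW seconds, then reply 421 until the window rolls over.

```shell
#!/bin/sh
# Hypothetical fixed-window rate accounting sketch (not PostFWD's
# implementation): LIMIT messages allowed per WINDOW seconds.
LIMIT=10
WINDOW=1800
window_start=0
count=0

check_mail() {   # $1 = current unix time; result is left in $REPLY
  now=$1
  if [ $((now - window_start)) -ge $WINDOW ]; then
    window_start=$now       # new window: reset the counter
    count=0
  fi
  count=$((count + 1))
  if [ $count -gt $LIMIT ]; then
    REPLY="421 4.7.1 - Sorry, exceeded $LIMIT messages in 30 minutes."
  else
    REPLY="DUNNO"           # within limit: let Postfix decide
  fi
}
```

Calling check_mail ten times within one window yields “DUNNO” each time; the eleventh call yields the 421 reply, and a call after the window expires is allowed again.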

Check everything’s working:

At this point it’s a good sanity check to see whether everything is up and listening on the ports you expect. Use netstat to have a look at the two ports in question; you should see something strikingly similar to the below.

[root@server]# netstat -anpl | grep -E ':10040|:25'
tcp        0      0 127.0.0.1:10040             0.0.0.0:*                   LISTEN      10181/postfwd.pid
tcp        0      0 0.0.0.0:25                  0.0.0.0:*                   LISTEN      10278/master
tcp        0      0 :::25                       :::*                        LISTEN      10278/master

If you don’t see the above, it means one or both of the services are either not running or unable to bind to their respective ports. Check the services are running, check that things like SELinux aren’t stopping applications from binding to ports, and check /var/log/messages or your other syslog locations for evidence of problems.

Configuring PostFWD:

Earlier on, you copied postfwd.cf into /etc/postfix. It’s time to configure that file with your rules. We are going to define just one, to rate limit as described above, but you will likely want a lot more, plus a catch-all style rule to match “everything else”. Remember that our example is built on a custom internal mail server that has one specific task to do.

In this example, the only parts of the pre-supplied postfwd.cf we keep are the following:
[root@server]# cat /etc/postfix/postfwd.cf
## Definitions
# Whitelists

## Ruleset
# Rate Limit TO: domain.com - 10 messages in 1800 seconds (30mins)
id=ratelimit001; recipient_domain==domain.com; \
        action=rate(recipient_domain/10/1800/421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.)

Note our rate limiting rule; the syntax is fairly straightforward. Define the recipient domain, give it the ‘rate’ action, and then tell it how many messages to limit, in what time frame, and what triggered action happens when the limit is met. For us, we chose to reply with a 421 4.7.1 SMTP reply, thus rejecting the inbound RCPT command from the sending mail server.
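The same syntax stretches to PostFWD’s other counters too. As a second, purely illustrative example (the sender address and limits here are made up), a rule using PostFWD’s size() action to cap one sender’s total message volume might look like this:

```
# Hypothetical: cap sender@domain.com at 50MB of message volume per 24 hours
id=sizelimit001; sender==sender@domain.com; \
        action=size(sender/52428800/86400/421 4.7.1 - Sorry, exceeded 50MB in 24 hours.)
```

The shape is the same each time: an id, one or more match conditions, and an action with its counter parameters.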

Once you have your rule in place, check that PostFWD parses it correctly:
[root@server]# /usr/local/postfwd/sbin/postfwd -f /etc/postfix/postfwd.cf -C
Rule   0: id->"ratelimit001"; action->"rate(recipient_domain/10/1800/421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.)"; recipient_domain->"==;domain.com"


Trigger the rate limit manually to see how PostFWD replies to it:
PostFWD comes with a “sample request” file that you can pipe into it to see how it reacts to differing rules. Modify /usr/local/postfwd/tools/request.sample enough to suit your rate limit criteria (in our case, the recipient domain).
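For reference, the request file follows Postfix’s policy delegation protocol: a block of attribute=value pairs terminated by an empty line. A minimal request might look like this (the values here are illustrative):

```
request=smtpd_access_policy
protocol_state=RCPT
protocol_name=ESMTP
helo_name=mail.domain.com
sender=test@domain.com
recipient=test@domain.com
client_address=127.0.0.1
client_name=localhost

```

PostFWD replies on the same connection with a single “action=…” line, which is exactly what Postfix does for real during the RCPT stage.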

Now throw that sample request at PostFWD using netcat (you may need to install this with ‘yum install nc’):
[root@server]# nc 127.0.0.1 10040 </usr/local/postfwd/tools/request.sample

The action “DUNNO”, although worrying at first, is actually the desired outcome. PostFWD doesn’t know what to do with the message, so it states “DUNNO” back to Postfix and lets the message pass. Keep firing that command until you hit your rate limit.

[root@server]# nc 127.0.0.1 10040 </usr/local/postfwd/tools/request.sample
[root@server]# nc 127.0.0.1 10040 </usr/local/postfwd/tools/request.sample
[root@server]# nc 127.0.0.1 10040 </usr/local/postfwd/tools/request.sample
action=421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.

BINGO! We hit the rate limit (I’ve excluded the pointless command repetition from this guide). You can see that as soon as the rate limit is hit, PostFWD applies the custom action we set earlier: 421 4.7.1, message rejected. Now we just need to make that happen automatically, within Postfix.

Integration with Postfix

The integration of PostFWD into Postfix is relatively simple. For this example, we are going to add PostFWD as a check_policy_service server for Postfix to look up against. As we are specifically filtering on the recipient domain, I am going to add this to the “smtpd_recipient_restrictions” section of Postfix. This section may or may not already exist in your Postfix’s main.cf.

Open /etc/postfix/main.cf and add or amend the following:
smtpd_recipient_restrictions =
       check_policy_service inet:127.0.0.1:10040
       permit_mynetworks
       reject_unauth_destination

The key thing to note here is that check_policy_service sits ABOVE items such as permit_mynetworks. For us, localhost is a trusted network (see the config earlier on) and the mails we wish to rate limit also come from localhost, so if permit_mynetworks came first, the messages would always be passed and sent: Postfix would never bother checking with PostFWD via the check_policy_service, as it stops processing the restriction list after a successful OK reply.

And that’s it. Restart PostFWD, and then restart Postfix (PostFWD should always be up before Postfix), and you’re good to go. Rate limit events are logged to /var/log/maillog, along with all other mail operations, successful or not. You’ll want to tail this log for a while to see if anything’s going wrong.


A nice and controlled way of testing with actual mail is to telnet into Postfix from the system itself:
[root@server]# telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 mailtest1.vooservers.com ESMTP Postfix
HELO mail.domain.com
250 monitoringtest.vooservers.com
MAIL FROM: test@domain.com
250 2.1.0 Ok
RCPT TO: test@domain.com
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
message goes here
.
250 2.0.0 Ok: queued as 5BECA21C21
QUIT
221 2.0.0 Bye
Connection closed by foreign host.

This connects to the SMTP server (Postfix), HELOs as a mail server, defines a FROM: address, defines a TO: address, inputs some message body data, and then quits after the message is queued in Postfix. The client commands (HELO, MAIL FROM, RCPT TO, DATA, the message body, and QUIT) are the text you have to type in.

You can repeat this until you hit your rate limit; tail the maillog in another screen whilst you do. You’ll see Postfix happily relay all the mail up until you hit your defined rate limit, at which point PostFWD will step in and reply with the 421 message back to your telnet session immediately after your RCPT TO: command, so you’ll never get the chance to input any message body data. Perfect.


So to recap, we:
  • Installed Postfix and set it as the system’s default MTA
  • Configured the basics of Postfix, just enough to get it functioning as a minimal MTA
  • Installed PostFWD
  • Configured and tested rate limiting rules in PostFWD
  • Integrated PostFWD with the recipient check stage of Postfix

The possibilities with PostFWD are extremely numerous. I’d recommend anyone embarking on this to check out the full documentation of both Postfix and PostFWD; it proved invaluable to me at times during our configuration and testing of this (and multiple other) PostFWD instances.



If you have one or many MySQL replication slaves, you may need a handy way to monitor each slave’s status within your existing Nagios monitoring platform. This handy NRPE-based bash script will help you out…

#!/bin/bash
# SQL Binary Replication Failure Detection      #
# Dave Byrne @ VooServers Ltd                   #

# Is the Slave IO thread running?
slaveio=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Slave_IO_Running | awk '{ print $2 }'`
# Is the Slave SQL thread running?
slavesql=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Slave_SQL_Running | awk '{ print $2 }'`

# Pull the last SQL error, just in case
lasterror=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Last_Error | awk -F : '{ print $2 }'`

# Work out whether replication has failed
if [ "$slavesql" = "No" ] || [ "$slaveio" = "No" ]; then
  # It has failed, go CRITICAL
  echo "Slave IO Running? ... "$slaveio
  echo "Slave SQL Running? ... "$slavesql
  echo "Last SQL Error: "$lasterror
  echo "CRITICAL - MySQL Replication Failure!"
  exit 2
else
  # It's good, go OK
  echo "OK - MySQL Replication Running"
  exit 0
fi
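If you want to sanity-check the grep/awk extraction without a live slave, you can run it against a captured sample of ‘show slave status\G’ output. The sample text below is illustrative, not real replication output:

```shell
#!/bin/sh
# Exercise the same grep/awk pipeline the check script uses, against
# an illustrative fragment of 'show slave status\G' output
sample='Slave_IO_Running: Yes
Slave_SQL_Running: No
Last_Error: Error writing relay log'

slaveio=$(printf '%s\n' "$sample" | grep Slave_IO_Running | awk '{ print $2 }')
slavesql=$(printf '%s\n' "$sample" | grep Slave_SQL_Running | awk '{ print $2 }')
echo "IO=$slaveio SQL=$slavesql"    # prints: IO=Yes SQL=No
```

With Slave_SQL_Running parsed as “No”, the script’s if-test would fire and the check would go CRITICAL.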


  • Enter your MySQL root user’s password where applicable.
  • If either the Slave IO or the Slave SQL thread stops running, the check will return CRITICAL in Nagios.
  • Does not require sudo; run it straight from nrpe.cfg.
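To wire the script into NRPE, a command definition along these lines goes in nrpe.cfg (the command name and script path here are illustrative; use wherever you saved the script):

```
command[check_mysql_repl]=/usr/local/nagios/libexec/check_mysql_repl.sh
```

Your Nagios server then calls check_nrpe with that command name, and the script’s exit code (0 or 2) maps straight to OK or CRITICAL.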


© VooServers Ltd 2016 - All Rights Reserved
Company No. 05598156