
Intel L1 Terminal Fault (L1TF) “Foreshadow” Virtualised Platform Exploit





What is it?

The security conscious among you will be well versed in the technicalities of speculative execution exploits such as Spectre and Meltdown, affecting Intel Core, Celeron, Pentium, Xeon and even Atom CPUs (along with a whole host of AMD-based chips). Ever keen to keep Intel's security team on their feet, researchers from Belgium, Israel, the USA and Australia have discovered an exploit within Intel's SGX instruction set. On 14th August 2018, Intel released information regarding this new variant of side-channel data cache exploit, known as “Foreshadow”: a Level 1 (L1) data cache exploit with the ability to render guest VM data readable to other guests on a virtualised platform that makes use of SGX extensions on an Intel CPU.

There's a difference this time, however: L1TF Foreshadow (referred to as L1TF from here on out) only affects Intel CPUs using SGX, and SGX (Software Guard Extensions) is an instruction set only present on Intel's “Core” line-up of CPUs. So that's the old trusty Core and Core 2 ranges, along with the newer Core i3, i5, i7 and even i9 chips.

Does it affect you?

VooServers enterprise-level infrastructure clients and those within our hosted virtual environments will be pleased to know that we do not make use of any “Core” chips from Intel. Our core service backbone and our bespoke enterprise scenarios are comprised solely of Xeon CPUs. As such, there is no scope whatsoever for data breaches utilising this exploit for customers within VooServers managed infrastructure.

(There may be a negligible quantity of unmanaged, custom dedicated server customers with aging, legacy hardware that could be affected, however these are not virtualisation environments and hence should pose no risk to customer data. If you feel you are affected by this, please reach out to our support team at support@vooservers.com)
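
If you run Linux hosts of your own and want to double-check, reasonably recent kernels expose their own assessment of L1TF via sysfs. The file below only exists on kernels new enough to carry the vulnerability reporting interface, so its absence simply means your kernel needs updating before it can tell you anything:

# Query the kernel's view of this CPU's L1TF exposure.
# Expect "Not affected" on unaffected CPUs, or a "Mitigation: ..." string on patched, affected systems.
cat /sys/devices/system/cpu/vulnerabilities/l1tf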







Overview & Version Information:

I will be showing how to install and configure Oracle GoldenGate 12.3 (part of Oracle Fusion Middleware) to replicate a full Oracle schema from a 12c instance on Oracle Linux 7 into a Microsoft SQL Server 2014 Standard instance on Windows Server 2016.

  • Oracle Golden Gate v12.3.0.1.4 (For Oracle Linux 7)
  • Oracle Golden Gate v12.3.0.1.6 (For Windows Server 2016)
  • Oracle Linux 7.4 (Kernel 4.1.12-112.14.2.el7uek.x86_64)
  • Oracle 12c (v12.1.0.2.0)
  • Windows Server 2016 (x64 Datacentre)
  • Microsoft SQL Server 2014 (Standard v12.0.5207.0)

We will be making use of EXTRACT and REPLICAT processes for the initial data load, and also utilising trails, CDC and CDD to handle the live change data replication.

Throughout this article, Oracle Golden Gate will be referred to as OGG.


Installing OGG into Oracle Linux 7 (12c DB):


Head to https://edelivery.oracle.com and download the relevant OGG 12.3 package; at the time of writing, 12.3.0.1.4 is “V975837-01.zip”. Transfer this zip file to a convenient location on your OL7 server.

<<< On the SOURCE SERVER >>>

On OL7, create the staging directory, and prepare by installing readline wrapper:

[root@shell]# mkdir /stage
[root@shell]# mv /path/to/zipfile.zip /stage/
[root@shell]# yum -y install readline readline-devel
[root@shell]# cd /stage
[root@shell]# wget ftp://ftp.pbone.net/mirror/download.fedora.redhat.com/pub/fedora/epel/7/x86_64/Packages/r/rlwrap-0.42-1.el7.x86_64.rpm
[root@shell]# unzip V975837-01.zip
[root@shell]# yum install rlwrap-0.42-1.el7.x86_64.rpm


Setup aliases in OL7 for GGSCI and SQLPLUS:

[root@shell]# su -l oracle
[oracle@shell]# nano ~/.bashrc

# Aliases for GoldenGate
alias sqlplus="rlwrap sqlplus"
alias ggsci="rlwrap ./ggsci"

[oracle@shell]# . .bashrc && alias
[oracle@shell]# mkdir /u01/app/oracle/product/ogg_src


NOTE: You may change the directory name created above; it must be within your Oracle installation's product directory, but you may name it whatever you wish. On later installations, I suffixed the directory with the version number (ogg_src_12-3).

Run the OGG installer:

Connect to the console of the server, VM Console if virtualised, or physical KVM console if using a dedicated system. You need to run the next steps in a graphical environment. This guide assumes you have a functioning X server or other compatible desktop environment to use.

Log on as your Oracle user, open a Terminal window:

[oracle@shell]# cd /stage/fbo_ggs_Linux_x64_shiphome/Disk1
[oracle@shell]# ./runInstaller




The graphical OGG installer will now start. Follow the on screen instructions.




Select 12c when prompted.




Your details here may differ from the screenshot shown.


Software Location: The full working path to the ogg product folder that you created earlier
Start Manager: Checked (starts the Manager as an automatic Linux service)
Database Location: The Oracle DB Home location of your instance
Manager Port: I've used a slightly different port; you are welcome to use whatever you wish, but be sure to substitute it in later steps of the install.




Let the installer complete.




Done, installation is complete. We will now work on installing OGG into Windows Server 2016.


Installing OGG into Microsoft Windows Server 2016 Datacentre:


<<< On the TARGET SERVER >>>

Head over to https://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html and download the relevant version of OGG for Windows Server MSSQL. At the time of writing it should be “Oracle GoldenGate 12.3.0.1.6 for SQL Server (CDC Capture) on Windows (64bit)”, roughly 75MB. Transfer the downloaded zip to your MSSQL server.

Create a new directory; for this example we are using “C:\GoldenGate”. Copy the contents of the extracted zip into the new directory.

Open an Administrator level, elevated command prompt, and change directory to the GoldenGate directory you created.

Run GGSCI and create the OGG subdirectories:

C:\Users\oggdba> cd C:\GoldenGate
C:\GoldenGate> ggsci.exe

GGSCI> CREATE SUBDIRS


Give the MGR process a custom name:

GGSCI> EDIT PARAM ./GLOBALS

MGRSERVNAME name-here

GGSCI> EXIT


Install the OGG Manager as a service, with some options:

C:\GoldenGate> install.exe ADDEVENTS ADDSERVICE AUTOSTART


Restart your Windows system and verify that the OGG Manager starts on boot:

GGSCI> INFO MGR



Create MSSQL Target Database, Schema, User and DSN:


This section will outline the basics of setting up the OGG target DB and DSN, although this should be taken with some interpretation; use your own settings, permissions, naming schemes etc. as appropriate.

<<< On the TARGET SERVER >>>

Open SQL Server Management Studio, and create a new database to be used for storing your OGG replicated data set:



Create the new DB.



Name it something sensible.



In my experience, you MUST change the Collation (default character set) to “Latin1_General_BIN2”. Without this set, I usually run into issues trying to replicate certain Unicode characters in fields in the source DB.
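
If you prefer to script the database creation rather than click through the GUI, a minimal T-SQL sketch would be the following (the database name OGGTARGET is just an example):

-- Create the replication target DB with the BIN2 collation up front,
-- rather than changing it after creation.
CREATE DATABASE OGGTARGET
COLLATE Latin1_General_BIN2;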

Create SCHEMA within new DB:

Right click on your new DB, and select “New Query”, type:

CREATE SCHEMA "SCHEMA1";


NOTE: “SCHEMA1” must be the name of your source SCHEMA that you are replicating.

Create the new User, and give SCHEMA ownership to user:

Right click “Security” in the SQL Instance branch (not within the Database), and select New Login.



Ensure SQL Server Authentication is used, and set a secure password. Select your recently created DB as the user's default DB, and choose “British English” as the user's default language.



Within “User Mapping”, check the DB you just created, and ensure “db_owner” is selected. Take this opportunity to set the default SCHEMA to the SCHEMA you created earlier.



Create System DSN for use by OGG:

Open Control Panel, Administrative Tools, and open “ODBC Data Sources (64bit)”. Change tab to “System DSN” and click the ADD button.



Select “ODBC Driver 11 for SQL Server”, name your DSN something logical and simple, in this example “oggrepldsn”, select the local SQL Server instance from the drop down. Ensure you select to use SQL Server Authentication. Check the box to connect to SQL to obtain additional settings, use the user you created earlier.

On the next screen, change the default DB to the DB created earlier. Leave everything else untouched and finish the DSN wizard.


Configuring GGSCI and Preparing for Initial Data Load


<<< On the SOURCE SERVER >>>

Verify the manager is running OK:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ggsci

GGSCI> EDIT PARAM MGR

[Here you may add any additional manager options you want; by default, you only need the PORT parameter]

GGSCI> INFO MGR

[Verify the manager is running; you may also use START MGR or STOP MGR]

Create Schema TRANDATA

GGSCI> DBLOGIN USERID <schema-user-here>
Password: <user-pass-here>
GGSCI> ADD TRANDATA SCHEMA1.*


Substitute “SCHEMA1” with the name of the schema you wish to replicate.

NOTE: “ADD TRANDATA” only adds TRANDATA for the tables matched by the selection after it. If you add new tables later, those tables will have no TRANDATA and therefore cannot be replicated until TRANDATA has been added for them. This is fine for me and this example; however, a more robust solution would be to use ADD SCHEMATRANDATA, which works at schema level rather than table level, so new tables within the schema are automatically included.
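
For reference, a sketch of the schema-level alternative, run from the same DBLOGIN session (substitute your own schema name as before):

GGSCI> ADD SCHEMATRANDATA SCHEMA1
GGSCI> INFO SCHEMATRANDATA SCHEMA1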

Verify that the TRANDATA is added OK:

GGSCI> INFO TRANDATA SCHEMA1.*


Create source table definition parameters:

GGSCI> EDIT PARAM DEFGEN

DEFSFILE /u01/app/oracle/product/ogg_src/dirdef/<filename-here>.def, PURGE 
USERID <oracle-user> PASSWORD <oracle-user-password>
TABLEEXCLUDE SCHEMA1.TABLEA;
TABLEEXCLUDE SCHEMA1.TABLEB;
TABLE SCHEMA1.*;


Substitute a relevant .def file name into the DEFSFILE parameter; you'll need to use this later.

NOTE: In my example, I exclude some tables that I know I am not going to need in my replication. You may or may not want to do this. Be aware that you cannot generate definitions for externally organized tables (if you’re using them).

Generate the source table definitions using DEFGEN:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ./defgen paramfile dirprm/defgen.prm


This creates the .def file within ./dirdef/

The generated *.def file now needs to be transferred to the TARGET SERVER, and placed within $INSTALL_DIR/dirdef/


Configure Initial Data Load EXTRACT


These steps configure the initial load groups that will copy source data and apply it to the target tables.

<<< On the SOURCE SERVER >>>

Add the initial data load EXTRACT batch task group:

[oracle@shell]# cd /u01/app/oracle/product/ogg_src
[oracle@shell]# ggsci

GGSCI> ADD EXTRACT EINI9001, SOURCEISTABLE


NOTE: EINI9001 is created from the following format EINI<unique ID, max 4 digits>

Verify the EXTRACT created with the following:

GGSCI> INFO EXTRACT *, TASKS


Configure the initial data load EXTRACT PARAM file:

GGSCI> EDIT PARAMS EINI9001

--
-- GoldenGate Initial Data Capture
--
EXTRACT EINI9001
USERID <oracle schema user here>, PASSWORD <oracle schema password here>
RMTHOST <IP of TARGET SERVER here>, MGRPORT 7890
RMTTASK REPLICAT, GROUP RINI9001
TABLEEXCLUDE SCHEMA1.CAP_*;
TABLEEXCLUDE SCHEMA1.DR$*;
TABLE SCHEMA1.*;


<<< On the TARGET SERVER >>>

Add the initial data load REPLICAT batch task group:

GGSCI> ADD REPLICAT RINI9001, SPECIALRUN
GGSCI> INFO RINI9001*, TASKS
GGSCI> EDIT PARAMS RINI9001

-- 
-- GoldenGate Initial Data Load Delivery 
-- 
REPLICAT RINI9001 
TARGETDB oggrepldsn, USERID oggrepluser, PASSWORD <SQL user password here>
DISCARDFILE ./dirrpt/RINI9001.txt, PURGE 
SOURCEDEFS ./dirdef/<definition-file-name-from-earlier>.def OVERRIDE
SOURCECHARSET PASSTHRU
MAP SCHEMA1.*, TARGET SCHEMA1.*;


INTERLUDE – Getting to this point in the guide assumes you have created the relevant tables/DDL in your target MSSQL database. OGG EXTRACT and REPLICAT processes will not create tables for you within MSSQL; REPLICAT expects them to be there to insert into. There is no agreed method of how best to do this. Personally, I export DDL from SQL Developer and then spend a lot of time pruning that output down to just the CREATE TABLE and KEY statements. Of course, you're then left with a lot of DDL statements that are only valid within Oracle, so you'll need to convert them into SQL that MSSQL understands. There are many ways to do this: there are paid-for 3rd party tools, and there are also free online tools such as SQLines. You could also do it manually if you didn't have many tables, although I wouldn't recommend that.
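
To give a feel for the sort of conversion involved, here is a trivial, entirely invented example of one Oracle table and a hand-converted MSSQL equivalent; the data type mappings are a judgement call and will vary with your data:

-- Oracle DDL as exported from SQL Developer (invented table):
--   CREATE TABLE SCHEMA1.CUSTOMERS (
--     ID      NUMBER(10)    NOT NULL,
--     NAME    VARCHAR2(100),
--     CREATED DATE,
--     CONSTRAINT PK_CUSTOMERS PRIMARY KEY (ID)
--   );
--
-- A roughly equivalent MSSQL table, pre-created on the TARGET:
CREATE TABLE SCHEMA1.CUSTOMERS (
  ID      NUMERIC(10)   NOT NULL,
  NAME    NVARCHAR(100),
  CREATED DATETIME2,
  CONSTRAINT PK_CUSTOMERS PRIMARY KEY (ID)
);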

<<< On the SOURCE SERVER >>>

Start the initial data load EXTRACT process:

GGSCI> START EXTRACT EINI9001


View its progress with:

GGSCI> VIEW REPORT EINI9001


NOTE: There may be many errors to resolve on your first EXTRACT run: table names not existing, data type mismatches, column names not existing, permissions, network-level restrictions such as firewalls, etc.

Assuming the EXTRACT runs, REPLICAT will start on the TARGET SERVER. Verify this, and its results, with the following on the TARGET SERVER:

GGSCI> VIEW REPORT RINI9001


If you have made it this far, you now have a DB in MSSQL with your Oracle data set in it, congrats! If that’s all you wanted, you can stop here, but most of the time, you will be aiming for live change data replication from Oracle. For this, we need to make use of a few more components of OGG.

Specifically, CDC and CDD: Change Data Capture (via EXTRACT on the SOURCE) and Change Data Delivery (via REPLICAT on the TARGET). The next section explains how to do this.


Configuring Change Data Capture via EXTRACT


Through the use of trail files being shipped from SOURCE to TARGET, OGG can replicate changes in data detected at source (and written to the trail files). Here’s how to do that.

<<< On the SOURCE SERVER >>>

Add the EXTRACT group for CDC:

GGSCI> ADD EXTRACT EORA9001, TRANLOG, BEGIN NOW, THREADS 1


NOTE: “THREADS” is an integer specifying how many EXTRACT threads are maintained to read the different redo logs on the different Oracle instance nodes. If you are not running an Oracle Cluster, or RAC, then set this to 1; setting a higher value does not improve single-instance performance.

Verify it created OK with:

GGSCI> INFO EXTRACT EORA9001


Configure the EXTRACT group for CDC:

GGSCI> EDIT PARAM EORA9001

--
-- Change Capture parameter file to capture
--
EXTRACT EORA9001
USERID <oracle-user>, PASSWORD <oracle-user-password>
RMTHOST <target-server-IP-address>, MGRPORT 7890
RMTTRAIL ./dirdat/1p
TABLEEXCLUDE SCHEMA1.CAP_*;
TABLEEXCLUDE SCHEMA1.DR$*;
TABLE SCHEMA1.*;


NOTE: The 2-character (max) identifier at the end of RMTTRAIL is important; make it unique, and remember it for later.

Create the GoldenGate Trail:

GGSCI> ADD RMTTRAIL ./dirdat/1p EXTRACT EORA9001, MEGABYTES 5


Verify that it created OK:

GGSCI> INFO RMTTRAIL *


And verify the results:

GGSCI> INFO EXTRACT EORA9001, DETAIL 
GGSCI> VIEW REPORT EORA9001



Configuring Change Data Delivery via REPLICAT


The trail files defined earlier will now be present on the TARGET server, and they can be used by a CDD REPLICAT process to replicate changed data from the SOURCE into the TARGET live.

<<< On the TARGET SERVER >>>

Edit Global PARAMs and create the checkpoint table:
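
A typical sequence for this step looks something like the following; the checkpoint table name oggrepluser.ggschkpt is only an example, so substitute your own SQL user and naming convention (and restart GGSCI after editing GLOBALS so the change is picked up):

GGSCI> EDIT PARAM ./GLOBALS

CHECKPOINTTABLE oggrepluser.ggschkpt

GGSCI> DBLOGIN SOURCEDB oggrepldsn USERID oggrepluser PASSWORD <sql-user-password>
GGSCI> ADD CHECKPOINTTABLE oggrepluser.ggschkpt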



Create REPLICAT checkpoint group:

GGSCI> ADD REPLICAT RMSS9001, EXTTRAIL ./dirdat/1p


NOTE: The two-character trail identifier in EXTTRAIL is the same one used earlier.

Configure REPLICAT PARAM file for CDD:

GGSCI> EDIT PARAM RMSS9001

REPLICAT RMSS9001
TARGETDB oggrepldsn, USERID oggrepluser, PASSWORD <sql-user-password>
HANDLECOLLISIONS 
SOURCEDEFS ./dirdef/1pmoracle.def
DISCARDFILE ./dirrpt/RMSS9001.DSC, PURGE 
MAP SCHEMA1.*, TARGET SCHEMA1.*;


Start the REPLICAT process:

GGSCI> START REPLICAT RMSS9001


Verify it is running with:

GGSCI> INFO REPLICAT RMSS9001



Summary:


Providing everything is running without issue, you are now finished, and you have a live replication scenario shipping data from Oracle 12c on Oracle Linux 7 into MSSQL 2014 on Windows Server 2016. This will continue to run for as long as you have the EORA and RMSS processes running. The initial data load EXTRACT and REPLICAT groups, EINI and RINI, are now redundant, unless you ever want to drop your whole data set from MSSQL and have it replicated from scratch again.

Some of the above processes may seem simple; however, documentation on a lot of this is sparse, and when it can be found within Oracle's documentation it is not always easy to interpret. In our testing, change data appeared on the TARGET around one second after committing on the SOURCE.

Please feel free to reach out to me with any questions you may have. I can’t promise I can answer them all, but I will do my best to assist if I can.







‘Dirty Cow’ may sound humorous and far removed from the world of IT systems security, but the truth couldn't be more different. Gaining its name from a play on the acronym for the Linux kernel mechanism ‘Copy On Write’, Dirty Cow is the latest in a seemingly never-ending timeline of Linux kernel exploits.

The theory is relatively simple: a malicious application sets up a race condition in order to effectively modify a root-owned file (executable or otherwise) when it is mapped into the personal memory space of a non-privileged user. These changes are then committed to storage by the kernel. Not ideal. TheRegister.co.uk explained the process perfectly:

The exploit works by racing Linux’s CoW mechanism. First, you have to open a root-owned executable as read-only and mmap() it to memory as a private mapping. The executable is now mapped into your process space. The executable has to be readable by the process’s user to do this.

Meanwhile, you repeatedly call madvise() on that mapping with MADV_DONTNEED set, which tells the kernel you don’t actually intend to use the memory.

Then in another thread within the same process, you open /proc/self/mem with read-write access. This is a special file that allows a process to access its own virtual memory as if it was a file. Using normal seek and write operations, you then repeatedly overwrite part of your own memory that’s mapped to the root-owned executable. The overwrite shouldn’t affect the executable on disk.

So now, your process has the read-only binary mapped in as a private read-only object, one thread is spamming madvise() on that read-only object, and another thread is writing to that read-only object. Writing to that memory object should trigger a CoW: the touched page of the executable will be altered only in the process’s memory – not the actual underlying root-owned file that’s mapped in.

However, due to the aforementioned bug, the kernel performs the CoW operation but then allows the process to write to the read-only mapped executable anyway. These changes are committed to disk by the kernel, which is bad news.
Whilst this exploit technically isn't new (it's been present in kernel versions dating back to 2007), its priority and significance have rocketed due to public acknowledgement in major bug trackers. Fully working code releases that make (malicious) use of this exploit are now circulating in infosec communities, ripe for misuse. Thankfully, most major distributions have already released patches to resolve the bug.

RedHat – https://access.redhat.com/security/cve/cve-2016-5195
Debian – https://security-tracker.debian.org/tracker/CVE-2016-5195
Ubuntu – http://people.canonical.com/~ubuntu-security/cve/2016/CVE-2016-5195.html
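
On RHEL or CentOS, one rough way to see whether your currently installed kernel package already carries the fix is to search its changelog for the CVE (a hit is a good sign; a silent result just means you should check your vendor's advisory directly):

# Look for the Dirty COW CVE in the installed kernel package's changelog
rpm -q --changelog kernel | grep -i CVE-2016-5195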

Linux Kernel creator and (still) key developer, Linus Torvalds, summarised the fix in his own release last week:

This is an ancient bug that was actually attempted to be fixed once (badly) by me eleven years ago in commit 4ceb5db9757a (“Fix get_user_pages() race for write access”) but that was then undone due to problems on s390 by commit f33ea7f404e5 (“fix get_user_pages bug”). In the meantime, the s390 situation has long been fixed, and we can now fix it by checking the pte_dirty() bit properly (and do it better).






In this guide, I show you how to install Postfix and PostFWD (Postfix Firewall Daemon), configure rate limiting for a specific recipient domain, and integrate PostFWD into Postfix.


Requirements

PostFWD v1.0+ (we will install v1.35)
Postfix v2.5+ (we will install v2.6.6)
CentOS 6.x (we are working in 6.8 x64)
You may also need things such as nc (netcat), telnet, and various Perl modules (detailed later)




Install Postfix

Postfix is a strong, reliable and extremely common SMTP server. CentOS 6 comes preinstalled with Postfix, but to use PostFWD you need to ensure you are running a version higher than 2.5.

Find out using ‘rpm’:

[root@server]# rpm -qa | grep postfix
postfix-2.6.6-6.el6_7.1.x86_64

Or use ‘yum’:
[root@server]# yum info postfix

Once installed, if for some reason you were using sendmail as your default MTA (Mail Transfer Agent), you’ll need to change this to postfix using ‘alternatives’:
[root@server]# alternatives --set mta /usr/sbin/postfix

Check you are running a valid version of Postfix:
[root@server]# postconf mail_version
mail_version = 2.6.6

Ensure Postfix starts on a system reboot:
[root@server]# chkconfig postfix on



Configure Postfix

Configuring Postfix is a rather open ended task, and will depend on what you are using the SMTP server for. If you have come this far, you likely already have a Postfix configuration, or you are simply using it to relay mails for a specific application. Either way, you should look to set some of the most basic Postfix configuration options in ‘/etc/postfix/main.cf’:

myhostname = Set as the mail server's FQDN/hostname
mydomain = The domain name of the mail server
myorigin = Usually the same as $mydomain
inet_interfaces = Set to all to listen on all network interfaces
mydestination = $myhostname, localhost, $mydomain
mynetworks = 127.0.0.0/8, <your-server-IP>/32
relay_domains = $mydestination
home_mailbox = Maildir/

If you are relaying from a specific location/server, you will of course need to adjust how you do this. This how-to is not a Postfix/SMTP server configuration guide; it is a guide to integrating PostFWD with Postfix.



Install PostFWD

PostFWD, or Postfix Firewall Daemon, is a daemonised process that acts as a check policy service for Postfix. It has a customisable ruleset that it applies dynamically to any and all mail that Postfix sees; we'll touch more on that later. It's very powerful, and offers several mail handling features that would otherwise not be possible in Postfix alone (or any other MTA for that matter).

We need version 1.0 or higher, so grab the tarball from postfwd.org, and run through some initial setup steps:
[root@server]# cd /usr/local
[root@server]# wget http://postfwd.org/postfwd-1.35.tar.gz 
[root@server]# tar -xvzf postfwd-1.35.tar.gz
[root@server]# mv postfwd-1.35 postfwd
[root@server]# cp /usr/local/postfwd/etc/postfwd.cf /etc/postfix/
[root@server]# cp /usr/local/postfwd/bin/postfwd-script.sh /etc/init.d/postfwd
[root@server]# chkconfig postfwd on
[root@server]# service postfwd start

Whoa there, it's not that easy. As the PostFWD documentation states quite adamantly, this will not work (or start) without a couple of Perl modules installed.

[root@server]# yum -y install perl perl-CPAN perl-prefork gcc

You’ll need to do the rest in ‘cpan’
[root@server]# cpan
cpan[1]> install Net::Server::Daemonize
...
cpan[1]> install Net::Server::Multiplex
...
cpan[1]> install Net::DNS
...

Once all of the Perl modules (and Perl) are installed, it’d be a great idea to issue a yum update, and reboot the system. Now you are ready to continue and configure PostFWD.

In terms of configuration, the world is your oyster with PostFWD. As the name suggests, it is essentially a firewall for your mail server: it can allow, drop, defer, reject silently, rate limit, and match rules by message character counts, body sizes, send frequency, or a combination of any number of these factors. Want to stop users x, y and z from sending more than 200MB's worth of attachments in a 12-hour period? No problem.

In this specific example, we want to rate limit (rather aggressively) all outbound mail to a specific domain. Specifically, we don't want to send any more than 10 emails every 30 minutes. Mails sent after this limit is reached will be rejected permanently. Mails within that limit can be sent at any frequency (unlike the stock rate limiting within Postfix itself, where a 10-emails-in-30-minutes limit would delay ALL mails and send one mail every 3 minutes, eventually sending everything; in this scenario, that is not helpful).



Check everything’s working:

At this point it's a good sanity prod to check if everything is up and listening on the ports you expect. Use netstat to have a look at the two ports in question; you should see something strikingly similar to the below.

[root@server]# netstat -anpl | grep -E ':10040|:25'
tcp        0      0 127.0.0.1:10040             0.0.0.0:*                   LISTEN      10181/postfwd.pid
tcp        0      0 0.0.0.0:25                  0.0.0.0:*                   LISTEN      10278/master
tcp        0      0 :::25                       :::*                        LISTEN      10278/master
[root@server]#

If you don't see the above, it means one or both of the services are either not running or not able to bind to their respective ports. Check the services are running, check things like SELinux aren't stopping applications from binding to ports, and check /var/log/messages or your other syslog locations for evidence of problems.



Configuring PostFWD:

Earlier on, you copied postfwd.cf into /etc/postfix. It's time to configure that with your rules. We are going to define just one, to rate limit as described above, but you will likely want a lot more, and also a catch-all style rule to match “everything else” (a sketch of one is shown just below). Remember that our example was built on a custom internal mail server that has one specific task to do.
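
As a rough illustration only (double-check the exact syntax against the postfwd ruleset documentation), a catch-all rule at the end of a ruleset might look something like this:

# Catch-all: anything not matched by an earlier rule gets "dunno",
# i.e. PostFWD expresses no opinion and Postfix carries on with its own checks.
id=catchall ; action=dunno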

In this example, the only parts of the pre-supplied postfwd.cf we keep are the following:
[root@server]# cat /etc/postfix/postfwd.cf
##
## Definitions
##
# Whitelists
&&TRUSTED_NETS {
        client_address=127.0.0.1/32
};
##
## Ruleset
##
##########################################################################
#Rate Limit TO: domain.com - 10 messages in 1800 seconds (30mins)
id=ratelimit001
        recipient_domain==domain.com
        action=rate(recipient_domain/10/1800/421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.)
##########################################################################

Note our rate limiting rule; the syntax is fairly straightforward. Define the recipient domain, give it the ‘rate’ action, and then tell it how many messages to limit, in what time frame, and what triggered action happens when the limit is met. For us, we chose to reply with a 421 4.7.1 SMTP reply, thus rejecting the inbound RCPT command from the mail server.

Once you have your rule in place, check that PostFWD parses it correctly:
[root@server]# /usr/local/postfwd/sbin/postfwd -f /etc/postfix/postfwd.cf -C
Rule   0: id->"ratelimit001"; action->"rate(recipient_domain/10/1800/421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.)"; recipient_domain->"==;domain.com"

Great!

Trigger the rate limit manually to see how PostFWD replies to it:
PostFWD comes with a “sample request” file that you can pipe into PostFWD to see how it reacts to differing rules. Modify the following file to suit your rate limit criteria:
/usr/local/postfwd/tools/request.sample

Now throw that sample request at PostFWD using netcat (you may need to install this with ‘yum install nc’).
[root@server]# nc 127.0.0.1 10040 </usr/local/postfwd/tools/request.sample
action=DUNNO

The action “DUNNO”, although worrying at first, is actually the desired outcome. PostFWD doesn’t know what to do with the message, so it states “DUNNO” back to Postfix and lets the message pass. Keep firing that command until you hit your rate limit.

[root@server]# nc 127.0.0.1 10040 </usr/local/postfwd/tools/request.sample
action=DUNNO
[root@server]# nc 127.0.0.1 10040 </usr/local/postfwd/tools/request.sample
action=DUNNO
[root@server]# nc 127.0.0.1 10040 </usr/local/postfwd/tools/request.sample
action=421 4.7.1 - Sorry, exceeded 10 messages in 30 minutes.

BINGO! We hit the rate limit (I’ve excluded pointless command repetition from this guide). You can see that as soon as the rate limit is hit, PostFWD applies our own custom action that we set earlier. 421 4.7.1, message rejected. Now we just need to make that happen automatically, and with Postfix.



Integration with Postfix

The integration of PostFWD into Postfix is relatively simple. For this example, we are going to add PostFWD as a check_policy_service for Postfix to look up against. As we are specifically filtering on the recipient domain, I am going to add this to the “smtpd_recipient_restrictions” section of Postfix. This section may or may not already exist in your Postfix main.cf.

Open /etc/postfix/main.cf and add or amend the following:
smtpd_recipient_restrictions =
       check_policy_service inet:127.0.0.1:10040
       permit_mynetworks
       reject_unauth_destination
127.0.0.1:10040_time_limit = 3600

The key thing to note here is that check_policy_service sits ABOVE items such as permit_mynetworks. For us, localhost is a trusted net (see the config earlier on), and the mails we wish to rate limit also come from localhost, so if permit_mynetworks came first, the messages would always be permitted and sent: Postfix would never bother checking with PostFWD via check_policy_service (it stops processing after a successful OK reply).

And that's it. Restart PostFWD, and then restart Postfix (PostFWD should always be up before Postfix), and you're good to go. Rate limit events are logged to /var/log/maillog, along with all other mail operations, successful or not. You'll want to tail this log for a while to see if anything's going wrong.



Testing:

A nice and controlled way of testing with actual mail is to telnet into Postfix from the system itself:
[root@server]# telnet 127.0.0.1 25
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
220 mailtest1.vooservers.com ESMTP Postfix
HELO mail.domain.com
250 monitoringtest.vooservers.com
MAIL FROM: test@domain.com
250 2.1.0 Ok
RCPT TO: test@domain.com
250 2.1.5 Ok
data
354 End data with <CR><LF>.<CR><LF>
message goes here
.
250 2.0.0 Ok: queued as 5BECA21C21
quit
221 2.0.0 Bye
Connection closed by foreign host.
[root@server]#

This connects to the SMTP server (Postfix), HELOs as a mail server, defines a FROM: address and a TO: address, inputs some message body data, and then quits after the message is queued in Postfix. The lines you type are HELO, MAIL FROM, RCPT TO, data, the message body, the terminating ".", and quit; everything else is the server's response.

You can repeat this until you hit your rate limit. Tail the maillog in another screen whilst you do this and you'll see Postfix happily relay all the mail up until you hit your defined rate limit; PostFWD will then step in and reply with the 421 message back to your telnet session at the RCPT TO: stage, and you'll never get as far as entering any message body data. Perfect.



Summary:

So to recap, we:
  • Installed Postfix and set it as the system's default MTA
  • Configured the basics of Postfix just to get it to function in a basic MTA state
  • Installed PostFWD
  • Configured and tested rate limiting rules in PostFWD
  • Integrated PostFWD with the recipient check stage of Postfix


The possibilities with PostFWD are extremely numerous; I'd recommend anyone embarking on this to check out the full documentation of both Postfix and PostFWD, something that proved invaluable to me at times during the configuration and testing of this (and multiple other) PostFWD instances.

References:
http://postfwd.org/doc.html
http://www.postfix.org/documentation.html








If you have one or many MySQL replication slaves, you may need a handy way to monitor each slave's status within your existing Nagios monitoring platform. This handy NRPE-based bash script will help you out…

#!/bin/bash
# SQL Binary Replication Failure Detection      #
# Dave Byrne @ VooServers Ltd                   #
#################################################
#Is the Slave IO Running?
slaveio=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Slave_IO_Running | awk '{ print $2 }'`
#Is the Slave SQL Running?
slavesql=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Slave_SQL_Running | awk '{ print $2 }'`

#Pull the Last SQL Error just in case
lasterror=`mysql -u root --password="PASSWORD HERE" -Bse "show slave status\G" | grep Last_Error | awk -F : '{ print $2 }'`
#Work out if its failed or not..
if [ "$slavesql" = "No" ] || [ "$slaveio" = "No" ];
then
  #Its failed, go CRITICAL
  echo "Slave IO Running? ... "$slaveio
  echo "Slave SQL Running? ... "$slavesql
  echo "Last SQL Error:  "$lasterror
  echo "CRITICAL - MySQL Replication Failure!"
  exit 2
else
  #Its good, go OK
  echo "OK - MySQL Replication Running"
  echo $slavesql
  exit 0
fi


Notes:

  • Enter your MySQL root user's password where applicable.
  • If either the Slave IO or the Slave SQL stops running, the check will return CRITICAL in Nagios.
  • Does not require sudo; it can be run straight from nrpe.cfg (see the example wiring below).
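
A minimal sketch of the NRPE and Nagios wiring follows; the script path, command name, host name and service template are all examples, so adjust them to your own plugin directory and host/service definitions:

# /etc/nagios/nrpe.cfg on the MySQL slave (script path is an example)
command[check_mysql_replication]=/usr/lib64/nagios/plugins/check_mysql_replication.sh

# Service definition on the Nagios server (host_name is an example)
define service {
        use                     generic-service
        host_name               mysql-slave-01
        service_description     MySQL Replication
        check_command           check_nrpe!check_mysql_replication
}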






To make use of the JSONB features implemented in 9.4, you may need to upgrade your existing PgSQL 9.3 cluster to 9.4+. I cover the basics of how to perform an in-place upgrade.


  • 1. Add the PostgreSQL repo to apt:

    echo "deb http://apt.postgresql.org/pub/repos/apt/ utopic-pgdg main" > /etc/apt/sources.list.d/pgdg.list


  • 2. Install the repo’s key:

    wget -q -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -


  • 3. Update apt sources and install postgresql-9.4:

    apt-get update && apt-get install postgresql-9.4 && pg_lsclusters


  • 4. You will now have two pgsql clusters, your existing 9.3 one and the new default 9.4 one. We don’t need the 9.4 one, so we can drop it:

    pg_dropcluster --stop 9.4 main && pg_lsclusters


  • 5. Use pg_upgradecluster to perform an in-place upgrade of your 9.3 cluster:

    pg_upgradecluster 9.3 main && pg_lsclusters


  • 6. You will be left with a single, upgraded 9.4 cluster.







Utilising a master/slave (hot-standby) setup to provide a resilience layer at database level can be easy. The following assumes you have 2 PgSQL hosts at 10.10.50.1 and 10.10.50.2, both running Ubuntu 14.04 LTS and PostgreSQL 9.4 (9.4.5).

  • 1. On the master 10.10.50.1, edit the following in postgresql.conf:

    listen_addresses = '*'
    wal_level = hot_standby
    max_wal_senders = 3

    listen_addresses can also be scoped down to single or multiple server bound IP addresses, for added security/best practice

    wal_level defines what type of data, and how much of it, is written to and stored in the Write Ahead Log. Setting it to hot_standby tells PgSQL to write all the data that would have been written with “archive” mode, plus the data needed to reconstruct the status of running transactions.

    max_wal_senders defines the maximum number of processes used to send replication data to the slave. This can be fine-tuned for your DB load and network capacity.


  • 2. On the master, 10.10.50.1, edit the following in pg_hba.conf:

    host	replication		all		10.10.50.2/32		trust

    This entry allows the slave to communicate back to the master, but only for replication based tasks.

  • 3. On the slave, 10.10.50.2, edit the following in postgresql.conf:

    hot_standby = on

  • 4. On the slave, 10.10.50.2, create a new configuration file named “recovery.conf” and add the following:

    standby_mode = 'on'
    primary_conninfo = 'host=10.10.50.1'

  • 5. We now need to sync the DB data from the master to the slave so they can begin at the same point. Your mileage may vary with this, but the rsync command that would work in this scenario is the following. Note the excludes; these are important, don't sync those:

    rsync -av -e "ssh -p 22" --exclude pg_xlog --exclude postgresql.conf /var/lib/postgresql/9.4/main/* root@10.10.50.2:/var/lib/postgresql/9.4/main/

  • 6. Once the sync has completed, start the slave DB; once it is up, start the master DB. Replication will now be in effect. You can verify this with the queries shown below.
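
A quick way to confirm that streaming replication is actually established is to query each node via psql (the column names below are the 9.4 ones; on the slave, pg_is_in_recovery() should return true):

    -- On the master (10.10.50.1): one row per connected standby.
    SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;

    -- On the slave (10.10.50.2): returns 't' while running as a hot standby.
    SELECT pg_is_in_recovery();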







When it comes to dedicated servers, choosing an Operating System to suit your needs is crucial. Here at VooServers we offer a variety of custom setups, but by far the most common requests at setup time are for the “Famous Five”. That is, Windows Server 2008 (R2), Windows Server 2012 (R2), CentOS (6.x/7.x), Debian and Ubuntu. This quick rundown will be just the resource you need if you’re on the fence about one or the other.

Linux

Of the five OSes mentioned, three are Linux-based (or at least built on a *nix core). Linux OS installs are by far the most popular for server deployments, and it's easy to see why: low resource overheads, unparalleled stability, and vastly reduced licensing costs (often none). For the sake of these overviews, we'll be looking at the non-GUI, server-core installations.


CentOS

The “go-to” Linux OS for many. Praised for its simplicity, this Linux OS is a popular choice because it is built around, and entirely based on, RHEL (Red Hat Enterprise Linux). It is almost 100% binary compatible with the RHEL cores. That fact alone opens up a lot of flexibility with packages and software installs, while negating the need for a costly RHN (Red Hat Network) update/support licence.

Stability/Server Features: 3 out of 5
Ease of Use: 3 out of 5


Debian

Another very popular OS choice. Debian embodies the epitome of server stability and has been a prominent server OS for nearly 20 years. This unparalleled stability is traded off against usability, and Debian is often criticised for being slightly too cumbersome. It's often compared negatively to RHEL, but this is typically by users who are not fully familiar with Debian's operations. Another point of note: as of the Debian Squeeze release around 2011, all software packages bundled and installed with the OS are free software; prior to this, certain packages required extra purchases.

Stability/Server Features: 4 out of 5
Ease of Use: 2.5 out of 5


Ubuntu

Ubuntu is a modern derivative of Debian, developed by the for-profit organisation Canonical. As a server OS it is reliable, but the unnecessary packages included to aid user experience often become the undoing of this stability. Certain aspects of the OS, such as the installer, how it implements ‘sudo’, and its package manager, mean that Ubuntu is remarkably easy to use, at least compared to its Debian parent. Users of Ubuntu often compliment the level of support given by the technical communities; with it being such an up-and-coming OS, the interest and activity level is high.

Stability/Server Features: 3 out of 5
Ease of Use: 4 out of 5


Windows

The remaining two operating systems are Windows-based. In many applications, there's simply no alternative to having a globally recognisable and usable GUI, product support at the touch of a button, and the most widely developed-for software ecosystem in the world. Of course, the trade-off here is cost. Licensing is a serious consideration when planning out your deployment. As much as you'd love the ease of an MS GUI, can your endeavour justify the rather large cost of Windows licensing?

Windows Server 2008 R2

The “go-to” choice of many. A core of the industry for many years, its support base has been hard for Microsoft to shift over onto the 2012 range of operating systems. Built on a Windows 7 kernel and core, its no-nonsense GUI and rock-solid stability are a force to be reckoned with in the server world. The only problem is that, these days, there are some technical limitations that you should consider: 2008 R2 caps physical memory at 1TB, and if you're using it as a virtualisation host, the VHD file format for virtual disks is capped at 2TB. If operating in a cluster, you can only have 16 2008 R2 nodes. If you're planning a large-scale deployment, or virtualised applications that will use a lot of disk space, these should be taken into account and traded off against 2008's massive support base, bug-free nature and no-frills “just works” GUI.

Stability/Server Features: 3 out of 5
Ease of Use: 4 out of 5


Windows Server 2012 R2

2012 R2 is built on a Windows 8 core (or rather an 8.1 core). Released in late 2012, it addresses many of the limitations imposed by 2008 R2: physical memory, for example, is now capped at 4TB. Hyper-V now uses the VHDX file format, increasing the virtual disk limit to a whopping 64TB. And for the clustered-computing crowd, you can have up to 64 2012 R2 nodes running a maximum of 8,000 VMs! The downside, in our opinion, is that 2012 R2 has unfortunately ported across most of the 8.1 GUI: that is, the Metro interface, app screen, and Start button. In a server environment, where precision is key and fluidity of tasks dictates your daily workflow, I can see no reason to have a full-featured Metro interface on a server. Even areas such as Task Manager and Control Panel are greatly cumbersome to use in a rush.

Windows Server 2016 is soon to be released (a Technical Preview is already under testing). This is built on a Windows 10 core, and will address the interface issues inherited from 2012.


Stability/Server Features: 4 out of 5
Ease of Use: 3 out of 5







Windows 10, the source of much controversy over the last 6 months or so, is finally upon us, and has been for a solid month or two now. Officially released on July 29th 2015, the first machines of users who opted in to the free upgrade process began to take the plunge. I take a look at 10's myriad positives and pitfalls, and cast a viewpoint on whether Microsoft are onto a winner or not…

The Good

Task View.
Yes, the addition of a “Mac like” Exposé/Mission Control window peek feature. This one I like a lot: a quick tap of Windows Key + Tab will spring your 10 desktop into life and display each open application in a handy, easy-to-view minified group view. This scales seamlessly across multiple physical monitors too; on my office station I currently have 3 monitors, each heavily populated with application windows. Pro tip: mapping the keyboard strokes to a spare macro button on your mouse really speeds this up.


The Start Menu.
It’s back! Ok now hear me out on this one. A lot of people swear by the metro interface of 8 and 8.1, and were early adopters from the first versions of Windows 8. The claims were that it was much quicker to find certain settings areas or applications by using the metro interfaces search functions. I agree, it may have been quicker to find, but having Metro shut off your view of any open apps and your task bar, on all monitors, whilst it did this, was such a massive hindrance to your workflow in a business environment, that it killed any hint of productivity that you might have had going at the time. And don’t get me started about the location of the shutdown/Reset buttons! For me, the return of a semi-traditional start menu layout, which doesn’t disrupt your desktop view when you open it, was critical for the success of Windows 10. Kudos to Microsoft on the integration of Metro Tiles into an otherwise unused space.


Boot-up Time.
Restarts in Windows are a necessity sometimes, whether it be to apply those pesky updates, or simply because your work machine that's been up for 162 days is starting to bog down a little bit… Getting back up and into your desktop is better if it happens as quickly as it can. Again taking my fairly solid work machine as a benchmark, I've timed this using extremely high-tech scientific instruments (a Samsung Galaxy S5) at a fraction over 9 seconds. This is with an enterprise-level Intel SSD as the boot drive, and only timed to the login prompt (as our domain logon would add precious unfair seconds). So to summarise: speedy, yes, good.



The Bad

Windows Updates.
At the time of writing this piece, I'm going to go ahead and give Microsoft the benefit of the doubt and credit them with the assumption that Windows Update is simply not finished. Firstly, Microsoft have found the need to ‘hide’ Updates in the most illogical place, and to make matters worse, have left no breadcrumb to where they've put it. Naturally, you'd type “Update” into the search box. Nope. Nothing. Okay, well it's in Control Panel usually, so I'll head there. Nope. Nothing. Hmm. It turns out it's hidden in the “All Settings” section of the notification panel that pops out of the right-hand side of the screen. Why? And furthermore, why didn't it come up in the search results for “Update”? Poor usability. Secondly, once you've managed to find and launch Windows Update, you're greeted with a stripped-down Metro-app-style interface; personal gripes aside, there simply isn't the level of control in this interface that there needs to be. You have 200 updates to apply to a newly installed system? OK, that's fine, but you can't de-select a single one of them. You have to install them all, and then go in and uninstall what you didn't want afterwards from Programs & Features. Not cool. The final gripe about updates (and yes, I'm aware a lot of this can be controlled via GPOs etc.) is forced reboots at off-peak times, or scheduled reboots within the next 4 days. Nope. No thank you. You do not have permission to reboot my machine at 3.30am, ever. And forcing me to pick a time in the next 4 days ONLY for a forced reboot gets you a free ticket on the train to disabling the Windows Update service.


Privacy.
My data is mine, which may seem like a silly statement, but it seems it needs to be reiterated again and again. It's mine, all of it, and I don't want any of it being needlessly transmitted back to Redmond HQ. By default, if you don't delve into the hidden options sections of the Windows 10 install process, you'll be sharing a lot more than the odd tracking cookie from a dodgy website with our pals over in the marketing team at Microsoft. Speech input, pen input, calendar details, contact information, geographic location and raw URL browser history are all openly shared and transmitted back to Microsoft at the drop of a hat. Along with the staggering misuse of trust that is openly sharing your unique advertising ID with 3rd parties, you'd be excused for thinking that someone was pulling your leg. Nope. All of this is enabled by default in the Windows 10 installation procedure. You can disable it, but you'll need super sharp eyes to catch the “Customize Settings” link at the bottom of one of the non-descript install screens. The good news is that you can turn everything off within the OS as well, so don't fret too much if you did miss it. This sort of sharing of information is OK if you want to help Microsoft improve its services and you don't think the data you're transmitting is particularly security critical. For an enterprise user working with customers' entire company infrastructures daily, however, leaking this sort of data is a crippling security flaw. These sorts of things should be offered as a default-disabled option, not enabled and hidden from non-tech-savvy users.



The Summary

As a hard-core enterprise user of Windows 7, I was dead against adoption of the previous efforts from Microsoft. 8 and 8.1 fell very short of what they were meant to be. To me it seemed like they were used simply as an exercise in practising how to get the Metro interface to work on the desktop environment. They were slow, clunky, poorly thought out, and just a downright chore to use on a daily basis. 10 has taken a fresh look at Metro and has condensed its best bits into the smallest-impacting footprint it can in the newly restored Start Menu. Taking myself as a benchmark, I believe this will win over a large number of the hard-core 7 supporters, as it has me. Coupled with the fancy new Multiple Desktops, Task View, Notification Panel and many other features, I do truly think that Microsoft have the basis of an OS that will become the new go-to/de facto standard for enterprise desktop installations. That being said, I do think they are still missing a few tricks. Windows Update is simply not in a finished state and needs a complete overhaul. The mismatch of where some settings applications are, and why they're not in Control Panel (EVERYTHING should be in Control Panel, no matter where else it is), is a mystery to me, and again smacks of “unfinished”-ness.

As we're only a few months into 10, I'm willing to give it the benefit of the doubt and state that, YES, Windows 10 could very well be a game changer. Certainly if the game is to win over the old-school 7 users and tempt across the lazy 8 and 8.1 users. Windows 10 has great promise; Microsoft just need to finish it 😉







If you or your company provide virtual servers within a Xen virtualisation environment, then it's probably safe to say that you've run into network overuse or misuse in the past on one or more of your hypervisors. Troubleshooting this and finding the VM responsible can be tricky, as many control panels don't report live virtual interface data (and even if they did, you can't connect to them during a large-scale attack!).

We’ve compiled a few of the simplest, and most direct ways of pinpointing exactly which pesky VM is the cause. The only thing you need to have installed? Sysstat.

Network Misuse or Overuse (Inbound or Outbound Attacks)

If your network graphs alert you to network spikes, or suspicious activity such as either bursting or sustained high PPS (packets per second), then you could have an attack on your hands. With budget VMs being so cheap and attainable, and instant deployment pretty much the norm, it makes sense for malicious 3rd parties to use them as staging platforms to participate in traditional traffic-based DDoS and other common reflection-based attacks.

If the attack is large enough, you will struggle to connect to your hypervisor over the network, so physical access may be required for this one.

The following command will give you a solid overview of the network use, per interface. This includes the virtual interfaces bound to your VMs:

sar -n DEV 1 3

Explained:

This command uses sar. Sar is a handy tool that collates and displays various pieces of data from system activity counters; it can also be used to display, in more useful ways, the contents of binary data files containing system performance history.

  • -n – Reports the network statistics
  • DEV – Targets specifically the network devices
  • 1 – Interval in seconds between re-polling sar
  • 3 – Number of times to poll sar before averaging the results
Running the command should garner you something along the lines of this:

[Screenshot: example sar -n DEV output, listing each interface with its rxpck/s, txpck/s, rxkB/s and txkB/s columns]

The above is largely normal, if you excuse the odd marginally high traffic level. The first four columns are what should be of interest to you: receive and transmit packets per second, and receive and transmit kB/s. If a VM is attacking, or being attacked, these values will usually all be in the hundreds of thousands. It will become hard to read the specific values, as the columns merge together.

The virtual interfaces are nicely named with the VM ID included. So this immediately tells you the unfortunate target or the unscrupulous attacker. However, you still don’t know the IP Address. And with the attacks ongoing, you still can’t log in to the friendly web GUI to suspend the VM.

The following command can help. There may also be times when you simply don't want to shut the VM down, but you do want to stop the attack at network level. Let's assume you want the IP of vm1686.

find / -name vm1686.cfg -exec grep "vif" {} \;

Explained:

This uses a typical find command, but is combined with the -exec switch for added functionality.

  • / – Start search in the root
  • -name – Search by full file name
  • vmXXX.cfg – Substitute the VM ID into here
  • -exec grep "vif" {} \; – This executes a simple grep command on every file found, with the filename of each result placed after the grep parameters.
Tip: You could even go further and pipe the output into awk to cut down on the unneeded information: | awk '{ print $3 }'

The output of the above should give you something that looks like this:

[Screenshot: grep output showing the vif= line from vm1686.cfg, which includes the VM's IP address]

From there, you can block/blackhole/nullroute the IP as you please, without having to shut down the VM, and without ever needing to access your hypervisor's web GUI.
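
For completeness, a couple of generic ways of doing that from the hypervisor itself; the address 192.0.2.10 is a placeholder for the IP you found, and depending on whether your dom0 routes or bridges guest traffic you may need to target a different iptables chain:

# Null-route the offending/target IP so dom0 discards its traffic
ip route add blackhole 192.0.2.10/32

# Or drop it with iptables (FORWARD chain for routed/bridged guest traffic)
iptables -I FORWARD -s 192.0.2.10 -j DROP
iptables -I FORWARD -d 192.0.2.10 -j DROP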






