Wednesday, October 26, 2016

Elasticsearch - automatically configure the listening address on Oracle Cloud

In an earlier post we showed how to install Elasticsearch on Oracle Linux within the Oracle Cloud for testing purposes. In that post we showed that a default installation will make your Elasticsearch node listen on the IPv4 and IPv6 addresses of localhost. Even though this is fine for a single test system where you want to explore some of the options of Elasticsearch locally, it does not make much sense if you want to integrate it with other systems. In that case you will have to ensure that Elasticsearch is accessible externally.

As we are running our Elasticsearch node on Oracle Linux in the Oracle Public IaaS cloud, we can use the meta-data API as one of the options to retrieve the address we want to use as the listening address and configure it in Elasticsearch.

To ensure Elasticsearch listens on an address other than localhost, we have to set the network.host value in /etc/elasticsearch/elasticsearch.yml to the address we want it to listen on. The below script is an example of a snippet you can use in a wider deployment script:

#  This script is used to configure the settings for
#  elasticsearch to ensure it is listening to the IP set for local-ipv4
#  in the Oracle public IaaS cloud on Oracle Linux. We will retrieve
#  the local IPv4 address and add it to the elasticsearch configuration
#  file /etc/elasticsearch/elasticsearch.yml after which we restart
#  elasticsearch to ensure the new setting is active. This will
#  ensure that elasticsearch is now accessible outside localhost

echo "network.host : $(curl -s http://192.0.0.192/latest/meta-data/local-ipv4)" >> /etc/elasticsearch/elasticsearch.yml
service elasticsearch restart

For example, you can add this to the script used to install Elasticsearch on Oracle Linux. Both the installation script and the above snippet are available on GitHub.

Tuesday, October 25, 2016

Oracle Linux and understanding Oracle Cloud IP's

When working with the Oracle Public Cloud for the first time and trying to bind services on your Oracle Linux instance to the public internet, you might be a bit confused at first. If you look at it from the cloud portal point of view you will find two IP addresses: one public IP and one private IP. When you connect to your Linux machine remotely via SSH you will use the public IP, however when you check the instance itself you will find only a single NIC containing the private IP.

As an example, the below screenshot from the cloud portal shows both the internal and the external IP:

When connected to the Oracle Linux instance we can check the IPs, and we will notice only the private IP is available:

[opc@testbox08 ~]$ ifconfig
eth0      Link encap:Ethernet  HWaddr C6:B0:36:23:FE:CE
          inet addr:  Bcast:  Mask:
          inet6 addr: fe80::c4b0:36ff:fe23:fece/64 Scope:Link
          RX packets:2300612 errors:0 dropped:2 overruns:0 frame:0
          TX packets:643213 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:691364657 (659.3 MiB)  TX bytes:144613834 (137.9 MiB)

lo        Link encap:Local Loopback
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:2551 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2551 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:40849225 (38.9 MiB)  TX bytes:40849225 (38.9 MiB)

[opc@testbox08 ~]$

Oracle has network address translation on the edge of the network that translates the external IP to the internal IP and tunnels all traffic for the external IP to the internal IP while passing it through a firewall. This means you can control which traffic on the external IP will actually end up being tunneled to the internal IP address.

Even though this sounds like something you do not have to worry about too much, for some services it is vital to understand what the external IP is and not only what the internal IP is.

Monday, October 24, 2016

Oracle Cloud - persistent IaaS storage

In a recent post where I discussed working with orchestrations in the Oracle IaaS cloud, I warned that if you stopped the orchestration that also created your storage, you were in danger of wiping out the storage for that instance altogether.

In all reality, someone within Oracle took that feedback on board, and in the latest version of the Oracle cloud this issue / feature has been changed. The new version shows a new option when you create a new instance. The below screenshot shows the new option that ensures your storage will be persistent even if you stop your instance orchestration.

When you want to make sure a machine's storage is not deleted when stopping your orchestration, you will have to keep the "delete on termination" option unchecked.

In all honesty, this is a great improvement on the way the Oracle IaaS cloud service is handling storage.

Friday, October 21, 2016

Oracle Linux : sending mail with Sendmail

There can be many reasons why you need to send mail from your Linux host to some mail account. For example, you have an application that needs to send out mail to end users; in those cases you will use a central SMTP mail relay server within your corporate IT footprint. However, in some cases you want to have scripting that makes use of a local SMTP instance that will send the mail for you, either directly to the end user or via an SMTP relay server.

In cases where you want your local Linux machine to send messages directly to the recipient you will have to ensure that (A) your machine is allowed to make the connection through your firewall to the recipient mail server and (B) you have a local MTA (Mail Transfer Agent) in place. The best known MTAs are Sendmail and Postfix. We will use Sendmail as an example while showing how to send mails from an Oracle Linux machine to a Gmail account (or whatever account you require) by using simple bash commands and scripting.

Install Sendmail on Oracle Linux
Installing Sendmail is most likely the easiest step in this entire blogpost. You can install Sendmail by making use of the default Oracle Linux YUM repositories, using the command below. You will notice we install both sendmail and sendmail-cf; sendmail-cf is used to make your life much easier when configuring and reconfiguring Sendmail.

yum install sendmail sendmail-cf

For some reason Sendmail might give you some strange errors every now and then right after you have installed it and start using it. A good practice to ensure everything is ready to go is to stop and start the sendmail service again, as shown in the example below.

[root@testbox08 log]#
[root@testbox08 log]# service sendmail status
sendmail (pid  968) is running...
sm-client (pid  977) is running...
[root@testbox08 log]#
[root@testbox08 log]# service sendmail stop
Shutting down sm-client:                                   [  OK  ]
Shutting down sendmail:                                    [  OK  ]
[root@testbox08 log]#
[root@testbox08 log]# service sendmail start
Starting sendmail:                                         [  OK  ]
Starting sm-client:                                        [  OK  ]
[root@testbox08 log]# service sendmail status
sendmail (pid  1139) is running...
sm-client (pid  1148) is running...
[root@testbox08 log]#
[root@testbox08 log]#

After this your Sendmail installation on Oracle Linux should be ready to go and you should be able to send out mails. We can easily test this by sending a test message.

Sending your first mail with sendmail
Sending mail with Sendmail is relatively easy and you can make it even easier by ensuring your entire message is within a single file. As an example, I created the file /tmp/mailtest.txt with the following content:

Subject: this is a test mail

this is the content of the test mail

This means the mail is sent to my Gmail account, the subject will be "this is a test mail" and the body of the mail will show "this is the content of the test mail". Sending this specific mail (file) can be done by executing the below command:

[root@testbox08 tmp]# sendmail -t < /tmp/mailtest.txt

However, a quicker way of ensuring your message is processed is removing the "To:" part and using a command like the one shown below:

[root@testbox08 log]# sendmail < /tmp/mailtest.txt

The below screenshot shows that the mail has arrived in the mailbox, as expected. You can also see it has gotten the name of the account and the fully qualified hostname from the box I used to send the mail from. In this case this shows a Linux host located in the Oracle Public cloud.

Making your reply address look better
The above mail looks a bit crude and unformatted. Not the mail you would expect to receive as an end user, and especially not as a customer. Meaning, we have to make sure the mail that is received by the recipient is formatted in a better way.

The first thing we would like to fix is the name of the sending party. We would, as an example, have the name shown as "customer service" together with the reply address. To do so we add a "From:" line to the /tmp/mailtest.txt file which looks like:

From: customer service

Due to this formatting the mail client does not show the raw address; rather it shows the sender in the way we commonly see it, as is shown in the screenshot below:

Giving the mail priority
Now, as this is a mail from customer service informing your customer that his flight has been cancelled, it might be appropriate to give this mail a priority flag.
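To flag the message, you can add priority headers to the top of the mail file. The exact headers honored differ per mail client; the X-Priority and Importance headers shown below are commonly recognized, so treat them as an illustration rather than a guarantee:

```
From: customer service
Subject: your flight has been cancelled
X-Priority: 1 (Highest)
Importance: high
```

Most graphical mail clients will render a message with these headers with a priority marker next to it.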

Doing more with headers
In essence you can define every mail header you like, as long as it is understandable and allowed. To get an understanding of the type of headers that you can use and which ones are common, you can have a look at RFC 2076 "Common Internet Message Headers".

Sending HTML formatted mail
It is quite common to use HTML to format emails. Ensuring you can send your email in an HTML formatted manner requires that you have the right headers in your email and that you format your message in the appropriate HTML code (please review the example on GitHub).

An important thing to remember is that not every mail client is able to render HTML. For this it is good to use the "Content-Type: multipart/alternative;" header in combination with "Content-Type: text/html; charset=UTF-8". This will allow you to make a distinction between the HTML formatted and the non-HTML formatted version of your mail.
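As a sketch of what such a message could look like, the below mail file combines a plain-text and an HTML part under a multipart/alternative header; the boundary string "MAILPART" is an arbitrary value of our own choosing:

```
From: customer service
Subject: this is a test mail
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="MAILPART"

--MAILPART
Content-Type: text/plain; charset=UTF-8

this is the content of the test mail

--MAILPART
Content-Type: text/html; charset=UTF-8

<html>
  <body>
    <p>this is the <b>content</b> of the test mail</p>
  </body>
</html>

--MAILPART--
```

A client that can render HTML will show the second part; a text-only client will fall back to the first.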

All the examples above can be found in the example mail file "/tmp/mailtest.txt" which is available on GitHub.

Deploying Elasticsearch test node on Oracle Linux

In some cases you want to have a certain type of service running on your Oracle Linux instance just for testing and playing purposes. In my case I am experimenting with Elasticsearch from Elastic, and I need to install single node Elasticsearch instances every now and then on a new and fresh Oracle Linux instance. Even though it is not that much work, it makes more sense to build a script for this.

The below script will install Elasticsearch on Oracle Linux 6 by simply running the script. Elasticsearch is a search engine based on Lucene. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java and is released as open source under the terms of the Apache License. Elasticsearch is the most popular enterprise search engine, followed by Apache Solr, which is also based on Lucene.


function runMain {
  installJava
  installElasticsearch
  startElasticsearch
}

function packageInstalled () {
  numberOfPackages=`yum list installed | grep $1 | wc -l`
  if [ "$numberOfPackages" -gt "0" ]; then
       echo "true"
  else
       echo "false"
  fi
}

function installJava {
  javaInstalled=`packageInstalled java-1.8.0-openjdk`
  if [ "$javaInstalled" = "true" ]; then
       echo "java is already installed"
  else
      echo "installing java"
      yum -y install java-1.8.0-openjdk
  fi
}

function installElasticsearch {
  elasticsearcInstalled=`packageInstalled elasticsearch`
  if [ "$elasticsearcInstalled" = "true" ]; then
       echo "elasticsearch is already installed"
  else
       echo "importing elastic GPG key"
       rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

       echo "adding elastic repository to yum"
       echo "" >> /etc/yum.repos.d/public-yum-ol6.repo
       echo "[elastic]" >> /etc/yum.repos.d/public-yum-ol6.repo
       echo "name=Elasticsearch repository for 2.x packages" >> /etc/yum.repos.d/public-yum-ol6.repo
       echo "baseurl=https://packages.elastic.co/elasticsearch/2.x/centos" >> /etc/yum.repos.d/public-yum-ol6.repo
       echo "gpgcheck=1" >> /etc/yum.repos.d/public-yum-ol6.repo
       echo "gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch" >> /etc/yum.repos.d/public-yum-ol6.repo
       echo "enabled=1" >> /etc/yum.repos.d/public-yum-ol6.repo

       echo "installing elasticsearch"
       yum -y install elasticsearch
  fi
}

function startElasticsearch {
  echo "starting elasticsearch"
  service elasticsearch start
}

runMain


The script has been tested on Oracle Linux 6 running on the Oracle Public Cloud. After the script completes you can see that Elasticsearch is running and listening on port 9200 on both IPv4 and IPv6 by executing the below command:

[root@testbox08 init.d]#
[root@testbox08 init.d]# netstat -ln | grep 9200
tcp        0      0 ::ffff:       :::*                        LISTEN
tcp        0      0 ::1:9200                    :::*                        LISTEN
[root@testbox08 init.d]#
[root@testbox08 init.d]#

To test if Elasticsearch is indeed working and responding, you can curl port 9200 to see the default result from a vanilla Elasticsearch installation.

[root@testbox08 init.d]#
[root@testbox08 init.d]# curl http://localhost:9200/
{
  "name" : "King Bedlam",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "5fd7bOG-RP6MrTbI3denuA",
  "version" : {
    "number" : "2.4.1",
    "build_hash" : "c67dc32e24162035d18d6fe1e952c4cbcbe79d16",
    "build_timestamp" : "2016-09-27T18:57:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
[root@testbox08 init.d]#
[root@testbox08 init.d]#

Oracle Linux - Checking installed package

In some cases you want to verify from within a bash script whether a package is installed on your Oracle Linux instance. You can query what is installed by using the "list installed" option of the yum command. However, this gives you a human readable result and not something that works easily in the flow of a script. In essence you would like a boolean value that tells you whether a package is installed on your Oracle Linux instance or not.

The below code example is a bash script that will do exactly that. Within the example you see the packageInstalled function, which takes a variable for the package name you are looking for. The result will be true or false.


function packageInstalled () {
     numberOfPackages=`yum list installed | grep $1 | wc -l`
     if [ "$numberOfPackages" -gt "0" ]; then
           echo "true"
     else
         echo "false"
     fi
}

packageInstalled wget

In the example we are checking for the installation of the wget package. You can change wget to whatever package you need to be sure is installed. Using this building block function can help you to write a more complex script that installs packages when needed.

Check Exadata key InfiniBand fabric error counters via Oracle Linux

Checking the key InfiniBand fabric error counters on your Exadata machine is good practice and can be done from the Linux operating system. This check is part of exachk, which should actually be run regularly to ensure your Oracle Engineered system is in good shape. However, a lot of people also tend to run some checks more frequently. One of the checks that can be done more frequently is checking the key InfiniBand fabric error counters.

For checking the key InfiniBand fabric error counters on your Oracle Exadata you can use the exachk report; however, you can also do this directly from the Oracle Linux operating system. The code used for this check is shown in the details pages of the exachk report (and shown below).

if [[ -d /proc/xen && ! -f /proc/xen/capabilities ]]
then
  echo -e "\nThis check will not run in a user domain of a virtualized environment.  Execute this check in the management domain.\n"
else
  RAW_DATA=$(ibqueryerrors | egrep 'Recover.*SymbolError|SymbolError.*Recover|SymbolError|LinkDowned|RcvErrors|RcvRemotePhys|LinkIntegrityErrors');
  if [ -z "$RAW_DATA" ]
  then
    echo -e "SUCCESS: Key InfiniBand fabric error counters were not found"
  else
    echo -e "WARNING: Key InfiniBand fabric error counters were found\n\nCounters Found:\n";
    echo -e "$RAW_DATA";
  fi
fi

You can easily take this piece of code and put it into a custom bash script to be executed by your Exadata administrators. And if you implement a large set of custom built checks using Oracle Enterprise Manager, you can also use the above code to build a custom check.

When all results are good you should receive the message "SUCCESS: Key InfiniBand fabric error counters were not found". When building a custom OEM check it might be better to change this into a numeric value. For example, all OK should represent 0.
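As a sketch of how such a numeric check could look, the below hypothetical wrapper function translates the presence of counter data into a number OEM can evaluate (0 for all OK, 1 when counters were found):

```shell
# Hypothetical helper: translate the presence of key InfiniBand fabric
# error counter data into a numeric status (0 = all OK, 1 = counters found).
check_ib_counters () {
  # $1 holds the raw counter data; it is empty when no errors were found
  if [ -z "$1" ]; then
    echo 0
  else
    echo 1
  fi
}

check_ib_counters ""    # prints 0: no counters found, all OK
```

You would feed the RAW_DATA variable from the check above into this function and report its result as the metric value.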

In case there are errors you will receive a message like the one below shown as an example:

WARNING: Key InfiniBand fabric error counters were found

Counters Found:

   GUID 0x21286ccbaea0a0 port ALL: [SymbolErrorCounter == 2]
   GUID 0x21286ccbaea0a0 port 34: [SymbolErrorCounter == 2]

Using SQlite on Oracle Linux

Most people who are working with Oracle technology and who need a database to store information will almost by default think about using an Oracle Database. However, even though the Oracle database is amazing, it is not a fit for all situations. If you just need to store some information locally, or for a very small application, and you do not worry too much about things like performance, you might want to turn to other solutions.

In cases where you need something just a bit smarter and easier to use than flat file storage or JSON/XML files you can parse, and a full Oracle database is overkill, you might want to look at SQLite. SQLite is an open source software library that implements a self-contained (single file), zero-configuration, transactional SQL database engine. SQLite supports multi-user access, but only a single user can update the database at a time. It is largely an "untyped" system and all data is stored as strings.

SQLite is by default shipped with Oracle Linux 7 and is widely used in scripting whenever semi-smart storage of data is needed. Understanding SQLite and investing some time into it is well worth it if you regularly develop code and scripting for your Oracle Linux systems or for other purposes.

Interacting with SQLite
The easiest way to explore SQLite is using the SQLite command line. From your Linux shell you can use the sqlite3 command to open the SQLite command line. The below example shows how we open a new database, create a table, write some data to the table, query it, and after that exit. As soon as you open a database that does not exist and write something to it, the file is created on the filesystem.

[root@testbox08 tmp]#
[root@testbox08 tmp]# ls showcase.db
ls: cannot access showcase.db: No such file or directory
[root@testbox08 tmp]#
[root@testbox08 tmp]# sqlite3 showcase.db
SQLite version 3.6.20
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table objecttracker (object, version, object_id INTEGER);
sqlite> insert into objecttracker values ('api/getNewProduct','1.3',10);
sqlite> insert into objecttracker values ('api/getProductPrice','1.3',20);
sqlite> select * from objecttracker;
api/getNewProduct|1.3|10
api/getProductPrice|1.3|20
sqlite> .exit
[root@testbox08 tmp]#
[root@testbox08 tmp]# ls showcase.db
showcase.db
[root@testbox08 tmp]#

As you can see from the above example, we do not explicitly create the file showcase.db; it is simply created the moment we start writing something to the database. In our case the first write is the creation of the table objecttracker.

Even though knowing your way around the SQLite command line is something you have to understand, the more interesting part is using it in a programmatic manner.

Coding against SQLite
There are many ways you can interact and code against SQLite, a large number of languages provide a standard way of interacting with SQLite. However, if you simply want to interact with it using a bash script at your Oracle Linux instance you can very well do so.

Working from bash with SQLite is fairly simple if you understand the SQLite command line. You can simply pass the commands together with the command used to call the SQLite database. As an example, if we want to query the table we just created and get the output, we can use the below:

[root@testbox08 tmp]#
[root@testbox08 tmp]# sqlite3 showcase.db "select * from objecttracker;"
api/getNewProduct|1.3|10
api/getProductPrice|1.3|20
[root@testbox08 tmp]#
[root@testbox08 tmp]#

As you can see, we now get the exact same output as we got when executing the select statement in the SQLite command line.

This means you can use the above way of executing a SQLite command in a bash script and parse the results in bash for future use. In general SQLite provides a great way to store data in a database without the need to install a full fledged database server. In a lot of (small) cases a full database such as the Oracle database is overkill, as you only want to store some small sets of data and retrieve them using SQL statements.
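The parsing step can be sketched as below. This snippet rebuilds the showcase table in a throwaway database file (/tmp/showcase_demo.db, a name chosen for this example) and then splits the pipe-separated rows returned by sqlite3 in a bash read loop:

```shell
# Rebuild the showcase table in a throwaway database file
db=/tmp/showcase_demo.db
rm -f "$db"                                # start from a clean database file

sqlite3 "$db" "create table objecttracker (object, version, object_id INTEGER);"
sqlite3 "$db" "insert into objecttracker values ('api/getNewProduct','1.3',10);"
sqlite3 "$db" "insert into objecttracker values ('api/getProductPrice','1.3',20);"

# Rows come back pipe-separated by default; split them with IFS in a read loop
sqlite3 "$db" "select object, version, object_id from objecttracker;" | \
while IFS='|' read -r object version object_id
do
  echo "object ${object} is at version ${version} (id ${object_id})"
done
```

Inside the loop each column is available as a normal bash variable, so you can branch on the values or feed them into other commands.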

Ensuring ILOM power up on Exadata with IPMI

Like it or not, power interruptions are still a threat to servers. Even though servers ship with dual power supplies, which when done correctly are plugged into different power feeds within the datacenter, a power outage can still happen. Even though datacenters should have backup power and provide uninterrupted power to your machines, it still might happen. To ensure all your systems behave in the right way when power comes back on, you can make use of some settings within the IPMI configuration.

The Intelligent Platform Management Interface (IPMI) is a set of computer interface specifications for an autonomous computer subsystem that provides management and monitoring capabilities independently of the host system's CPU, firmware (BIOS or UEFI) and operating system. IPMI defines a set of interfaces used by system administrators for out-of-band management of computer systems and monitoring of their operation. For example, IPMI provides a way to manage a computer that may be powered off or otherwise unresponsive by using a network connection to the hardware rather than to an operating system or login shell.

Oracle servers have IPMI on board and it is good practice to make use of the HOST_LAST_POWER_STATE policy to ensure your server boots directly when power comes back online, or stays down when the server was already down during the power outage.

To verify the ILOM power-up configuration, enter the following command as the root user on each database and storage server:

if [ -x /usr/bin/ipmitool ]
then
  ipmitool sunoem cli force "show /SP/policy" | grep -i power
else
  /opt/ipmitool/bin/ipmitool sunoem cli force "show /SP/policy" | grep -i power
fi

When running this on an Exadata the output varies by Exadata software version and should be similar to:

Exadata software version or higher:

Exadata software version or lower:

If the output is not as expected, you will have to correct the settings to ensure your Exadata machine boots directly after power is restored.

Friday, October 14, 2016

Using osquery in Oracle Linux

Recently the guys at Facebook released an internal project as open source code. Now you can make use of some of the internal solutions Facebook uses to keep track of and analyse the compute nodes in their datacenters. osquery allows you to easily ask questions about your Linux, Windows, and OS X infrastructure. Whether your goal is intrusion detection, infrastructure reliability, or compliance, osquery gives you the ability to empower and inform a broad set of organizations within your company.

What osquery provides is a collector that on a scheduled basis will analyse your operating system and store this information in a SQLite database locally on your system. In essence osquery is an easily configurable and extensible framework that will do the majority of collection tasks for you. What makes it a great product is that everything is exposed through SQLite, which enables you to use standard SQL code to ask questions about your system.
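As an illustration of how that scheduled collection is configured, a minimal osquery.conf could look like the below; the query name pci_devices_snapshot is our own, and the interval is in seconds:

```json
{
  "schedule": {
    "pci_devices_snapshot": {
      "query": "SELECT pci_slot, driver, vendor_id, model_id FROM pci_devices;",
      "interval": 3600
    }
  }
}
```

Every entry under "schedule" is executed by the daemon at its own interval, so you can mix fast-changing checks with slow inventory queries in one file.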

After a heads-up from the Oracle Linux product teams about the fact that Facebook released this as open source, I installed it on an Oracle Linux instance to investigate the usability of osquery.

Installing osquery
Installation is quite straightforward. An RPM is provided which installs without any issue on Oracle Linux 6. Below is an example of downloading and installing osquery on an Oracle Linux 6 instance.

[root@testbox08 ~]#
[root@testbox08 ~]# wget "" -b
Continuing in background, pid 28491.
Output will be written to “wget-log”.
[root@testbox08 ~]#
[root@testbox08 ~]# ls -rtl osq*.rpm
-rw-r--r-- 1 root root 13671146 Oct  4 17:13 osquery-2.0.0.rpm
[root@testbox08 ~]# rpm -ivh osquery-2.0.0.rpm
warning: osquery-2.0.0.rpm: Header V4 RSA/SHA256 Signature, key ID c9d8b80b: NOKEY
Preparing...                ########################################### [100%]
   1:osquery                ########################################### [100%]
[root@testbox08 ~]#
[root@testbox08 ~]#

When you check, you will notice that osquery does not start by default and that some manual actions are required to get it started. In essence this is because no default configuration is provided during the installation. For the collector (daemon) to start, it looks for the configuration file /etc/osquery/osquery.conf, which is not part of the RPM installation. This results in the below warning when you try to start the osquery daemon:

[root@testbox08 init.d]#
[root@testbox08 init.d]# ./osqueryd start
No config file found at /etc/osquery/osquery.conf
Additionally, no flags file or config override found at /etc/osquery/osquery.flags
See '/usr/share/osquery/osquery.example.conf' for an example config.
[root@testbox08 init.d]#

Without going into the details of how to configure osquery and tune it for your specific installation, you can start testing osquery by simply using the default example configuration file.

[root@testbox08 osquery]#
[root@testbox08 osquery]# cp /usr/share/osquery/osquery.example.conf /etc/osquery/osquery.conf
[root@testbox08 osquery]# cd /etc/init.d
[root@testbox08 init.d]# ./osqueryd start
[root@testbox08 init.d]# ./osqueryd status
osqueryd is already running: 28514
[root@testbox08 init.d]#
[root@testbox08 osquery]#

As you can see, we now have the osquery daemon osqueryd running under PID 28514. As it is a collector, it is good to wait a couple of seconds to ensure the collector makes its first collection and stores it in the SQLite database. As soon as it has done so, you should be able to query the collected results for data.

To make life easier, you can use the below script to install osquery in a single go:

wget "" -O /tmp/osquery.rpm
rpm -ivh /tmp/osquery.rpm
rm -f /tmp/osquery.rpm
cp /usr/share/osquery/osquery.example.conf /etc/osquery/osquery.conf
/etc/init.d/osqueryd start

Using osqueryi
The main way to interact with the osquery data is using osqueryi, which is located at /usr/bin/osqueryi. This means that if you execute osqueryi you will be presented with a command line interface you can use to query the data collected by the osqueryd collector.

[root@testbox08 /]#
[root@testbox08 /]# osqueryi
osquery - being built, with love, at Facebook
Using a virtual database. Need help, type '.help'

As an example you can query which pci devices are present with a single SQL query as shown below:

select * from pci_devices;
| pci_slot     | pci_class | driver           | vendor | vendor_id | model | model_id |
| 0000:00:00.0 |           |                  |        | 8086      |       | 1237     |
| 0000:00:01.0 |           |                  |        | 8086      |       | 7000     |
| 0000:00:01.1 |           | ata_piix         |        | 8086      |       | 7010     |
| 0000:00:01.3 |           |                  |        | 8086      |       | 7113     |
| 0000:00:02.0 |           |                  |        | 1013      |       | 00B8     |
| 0000:00:03.0 |           | xen-platform-pci |        | 5853      |       | 0001     |

As osqueryi uses a SQLite backend, we can use the standard options and SQL provided by SQLite, and for example get a full overview of all tables that are present using the .tables command in the command line interface. This provides a long list of tables, which can be a good start to investigate what type of information is collected by default and can be used.


The example shown above is an extremely simple example; everyone with at least a bit of SQL experience will be able to write much more extensive and interesting queries, which can make life as a Linux administrator much easier.

Script against osquery
Even though using the command line interface is nice for ad hoc queries on a single Oracle Linux instance, it is more interesting to see how you can use osquery in a scripted manner. As this is based upon SQLite, you can use the same solutions you would use when coding against a standard SQLite database. This means you can use bash scripting; however, you can also use most other scripting and programming languages popular on the Linux platform, as most languages now have options to interact with a SQLite database.

Obtaining OPCinit for Oracle Linux

When deploying an Oracle Linux instance on the Oracle Public Cloud you will most likely use the default Oracle Linux templates. That is, up until the moment that you need more than what is provided by the template.

It might very well be that at some point you feel that scripting additional configuration after deployment is no longer satisfactory, and for some reason you would like to have your own private template. Oracle provides some good documentation on how to do this. You can read some of this in the "Using Oracle Compute Cloud Service" documentation under the "Building Your Own Machine Images" section.

The documentation however lacks one very important point: it references using OPCinit when creating your template, yet until recently the entire OPCinit package was missing online and you would not be able to download it. You could reverse engineer OPCinit from an existing template and use it, however the vanilla download was not available, nor was it available on the Oracle Linux YUM repository.

Now Oracle has solved this by providing a download link to a zip file containing two RPMs you can install in your template to ensure it makes use of OPCinit.

You can download OPCinit from the Oracle website. Unfortunately it is not available on the public Oracle Linux YUM repository, so you have to download it manually.

Sunday, October 02, 2016

Oracle Linux and opcinit for package installation

When deploying an Oracle Linux instance on the Oracle public cloud you will get a vanilla installation based upon the template you have selected. In some cases that might very well be exactly what you want and need. However, in many cases you want to ensure you have an Oracle Linux deployment that is tailored to your liking, or to the needs of the end users who will use this machine.

You have a couple of options to ensure your Oracle Linux instance is more than just a template based deployment. For one, you can build your own template and include everything you think is needed and is specific to your company or your customers.

Another, more direct and agile way of doing things is using the post-deployment scripting options. Whenever you deploy a standard Oracle Linux template provided by Oracle it will run opc-init (also known as opcinit). The opc-init scripting is provided by Oracle and runs every time you create an instance based upon the template it is included in.

What opc-init is intended for is the final configuration, the pre-bootstrapping tasks that cannot be included in the template. For example, installing specific packages that should not be part of the template, or running specific application configuration scripts. There is also the option to run Chef recipes that can do a full configuration and deployment for you.

Using the userdata section
Whenever you would like to influence, or rather add, functions to the standard tasks opc-init executes when you deploy a new Oracle Linux instance, you will have to include this in the “userdata” section of the orchestration JSON. Within the Oracle Public Cloud an orchestration describes the entire configuration of your instance in JSON format, and you can use this to fully automate your deployment.

Even if you do not use the API and the JSON orchestration file but instead use the web interface to create a new Oracle Linux instance, the orchestration JSON is still used in the background, and you are given the option to add additional information to it, or more specifically to its userdata section.

In the above screenshot you can see how we add an execution step to be executed by opc-init. The example shows that we want to have httpd installed to the Oracle Linux instance we deploy.

Adding packages
In case you want to add packages to your Oracle Linux deployment you can use the “packages” option, as we already showed in the above section. The part you have to add to the userdata section of the JSON, or via the web interface, is shown below:

 "packages": ["httpd"],

In essence there is nothing more to it than ensuring this part is included and you will have httpd installed as soon as your Oracle Linux installation becomes available.

In case you require more than a single package the JSON should look like the one below to ensure Oracle Linux and opcinit will recognize all packages and install them. In this example we will install both httpd and wget on Oracle Linux by invoking the packages option for opcinit.

 "packages": ["httpd", "wget"],

Doing more with opcinit
Adding packages is only a small example of what can be done with opcinit. However, in a lot of cases this will be exactly what is required and will help you build a better deployment.

Wednesday, September 21, 2016

Oracle Linux - retrieve openssh-key data from REST API

After posting my blogpost on the REST API within the Oracle Compute Cloud, and how to use it from within Oracle Linux when you deploy on the Oracle Compute Cloud, I received an email asking me how to handle the fact that public-keys can contain multiple keys.

The public-keys response of the REST API provides the SSH public keys specified while creating the instance at public-keys/{index}/openssh-key, where {index} is a number starting with 0.

The example in the original post showed how you can access the public keys by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/public-keys/{index}/openssh-key

This is a command-line example and not a programmable example of how to implement code that can do this for you. As an example I have written the code below and placed it on github. It provides a BASH script which can be used in conjunction with Oracle Linux. It will most likely run on other distributions as well without any issue, however this has not been tested.

#!/bin/bash
#   Example script to show how you can get the public keys for an instance
#   that have been provided. Those keys can for example be used to create
#   a new OS account with trusted keys. This is in a way the same as is
#   done by the default Oracle templates which do create an "opc" account
#   with the trusted keys for login which have been selected during the
#   creation of the new instance. This is tested with Oracle Linux on
#   the Oracle Cloud.
# LOG:
# VERSION---DATE--------NAME-------------COMMENT
# 0.1       20SEP2016   Johan Louwers    Initial upload to github
# Copyright (C) 2015  Johan Louwers
# This code is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
# This code is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this code; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.

# ccVmApiBaseUrl is used to access the root of the OPC API. The address
# below is the instance metadata address within the Oracle Compute Cloud.
ccVmApiBaseUrl="http://192.0.0.192/"

# ccVmApiVersion is the main version of the OPC API used by the script
ccVmApiVersion="1.0"

# ccVmApiMaxWait is the max time (in seconds) the functions will wait for a response from the api.
ccVmApiMaxWait=10

# The function ccVmGetNumOfPublicKeys will return the number of public keys
function ccVmGetNumOfPublicKeys {
   ccVmNumOfPublicKeys="$(curl -m $ccVmApiMaxWait -f -s $ccVmApiBaseUrl$ccVmApiVersion/meta-data/public-keys/)"
   curlStatus=$?

   if [ "$curlStatus" -eq 0 ]; then
      echo "$ccVmNumOfPublicKeys" | wc -l
   else
      echo "ERROR"
   fi
}

# The function ccVmGetPublicKeyType will return the public key type
function ccVmGetPublicKeyType {
   ccVmPublicKeyType="$(curl -m $ccVmApiMaxWait -f -s $ccVmApiBaseUrl$ccVmApiVersion/meta-data/public-keys/$1)"
   curlStatus=$?

   if [ "$curlStatus" -eq 0 ]; then
      echo "$ccVmPublicKeyType"
   else
      echo "ERROR"
   fi
}

# The function ccVmGetPublicSshKey will return the public key
function ccVmGetPublicSshKey {
   ccVmPublicSshKey="$(curl -m $ccVmApiMaxWait -f -s $ccVmApiBaseUrl$ccVmApiVersion/meta-data/public-keys/$1/openssh-key)"
   curlStatus=$?

   if [ "$curlStatus" -eq 0 ]; then
      echo "$ccVmPublicSshKey"
   else
      echo "ERROR"
   fi
}

function runMain {
   # Get the number of keys available from the API. For this we will use the
   # ccVmGetNumOfPublicKeys function.
   mainNumberOfKeys="$(ccVmGetNumOfPublicKeys)"

   if [ "$mainNumberOfKeys" = "ERROR" ]; then
      echo "unable to retrieve the public keys from the API"
      return 1
   fi

   # Loop through the number of keys found, check the type of the key and if
   # the key type is correct we will use it to add to the account so it can
   # be used as a trusted key. The key type we are looking for in this case
   # is the openssh-key type.
   i=0
   while [ "$i" -lt "$mainNumberOfKeys" ]
   do
      pubKey="$(ccVmGetPublicKeyType $i)"
      if [ "$pubKey" = "openssh-key" ]; then
         ccVmGetPublicSshKey $i
      fi
      i=$((i + 1))
   done
}

runMain

The example shown above will provide you with the list of public keys that were provided during the creation of the instance. It executes the runMain function, which in turn calls a number of other functions defined in the code.

The main reason for the function-based setup is that if you want to adopt this in a more complex scripting solution you want to be able to make it modular code instead of a monolithic script.

Please do check the latest version of the script on github; the above example code will not be maintained within this blogpost and all changes will be done on github. Meaning, bugfixes and improvements will not show above.

Monday, September 19, 2016

Retrieving IaaS Instance Metadata with Oracle Linux

When creating a new IaaS instance in the Oracle Compute Cloud Service you will notice that some information is pre-populated for you and some settings are already applied. For example, the hostname is set, the instance will have IP information and the keys will be populated for the OPC account you have defined.

This is done by a first-boot script provided by Oracle which takes information from outside the standard operating system and uses it to configure the Oracle Linux instance to your liking. For this it needs access to a set of information which is not part of the deployment template, as this is information specific to your deployment. It does this by making use of the instance metadata, which is provided by a REST API.

The REST API provides three sets of information:
  • attributes
  • meta-data
  • user-data
The meta-data is used during the first boot of the instance to ensure everything is configured to your needs. Oracle documentation describes this as follows:

"Two types of metadata are stored within your instances: user-defined instance attributes that you can define explicitly while creating instances, and predefined instance metadata fields that are stored by default for all instances. Scripts and applications running on the instances can use the available metadata to perform certain tasks. For example, SSH public keys that are specified while creating an instance are stored as metadata on the instance. A script running on the instance can retrieve these keys and append them to the authorized_keys file of specified users to allow key-based login to the instance using ssh."

When do you need the REST-API

Even though Oracle takes care of all post-deployment configuration when you use a standard Oracle-provided template, the REST API is interesting in a number of cases.

  1. When you create your own custom template and you will have to build (or modify) a custom first boot script. 
  2. When you are building scripting to install certain specific software and are in need of information from the meta-data set. 

In the first case you will most likely primarily use the meta-data information set; in the second case you will most likely use a combination of both meta-data and user-data. The user-data set enables you to provide information to the machine via the REST API (through additional JSON). For example, if you want to indicate whether a certain application instance should be the master or a slave in a cluster, you can use the user-data information set to state the role of the machine. In this post we will focus on the meta-data information set and not the user-data information set provided via the REST API.

In general, if you are building automation in your deployment model, in whatever way or form, you will at one point in time want to use the REST API.

How to access the REST API

The REST API is accessible from within your instance via the address 192.0.0.192 using HTTP. This means that if you try to access http://192.0.0.192/ you will be talking to the REST API. As you will not have a graphical user interface you will have to access it using curl. An example of how to access the root of the REST API is shown below:

[opc@testbox08 ~]$ curl http://192.0.0.192/
[opc@testbox08 ~]$

The REST API is compatible with the API you will see at, for example, Amazon Web Services; this makes it very easy for people who have already invested in automation and scripting based upon the standards initially developed by Amazon.

You can use latest or a specific version such as 1.0 as the version in the URL. To be fully sure that your automation scripting will always work and not break when a new latest version is launched, it is advisable to use a specific version. In the rest of this blogpost we will use the explicit version 1.0.
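The version pinning can be centralized in a small helper, so a later change of version is a one-line edit. The helper below is an illustrative sketch, not part of any Oracle tooling, and assumes the metadata address mentioned above:

```shell
# Illustrative helper: build meta-data URLs against an explicitly pinned
# API version so scripts do not change behaviour when "latest" moves on.
mdBase="http://192.0.0.192"
mdVersion="1.0"

mdUrl() {
  echo "$mdBase/$mdVersion/meta-data/$1"
}

# The curl command for any component can then be built consistently:
echo "curl -s $(mdUrl local-ipv4)"
# prints: curl -s http://192.0.0.192/1.0/meta-data/local-ipv4
```

Every script that sources this helper will then hit the same pinned version of the API.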

As with most REST APIs, we can visualize the API in a tree shape. One thing you have to remember: even though the REST API provides loose compatibility with the Amazon REST API, it is not the same. Oracle also warns about this in the documentation in the following words:

You may see certain additional metadata fields, such as reservation-id, product-codes, kernel-id, and security-groups, that aren’t documented. Don’t retrieve and use the values in the undocumented fields.

Reviewing the meta-data
The below meta-data components are available, all based upon version 1.0. While some are documented, some are not, and per Oracle's statement you should not rely on those while creating code.

[opc@testbox08 ~]$ curl http://192.0.0.192/1.0/meta-data/
[opc@testbox08 ~]$

Below is an explanation per meta-data component. The majority will work, however some will not work as you might be used to when using Amazon. You will also see that a number of components have the Amazon naming to ensure compatibility with code you might have created for Amazon.

The AMI-ID provides you with a unique ID for your IaaS instance. The name AMI-ID comes from Amazon Machine Image.

You will be able to access the ami-id by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/ami-id

The ami-manifest-path is something that comes from Amazon; within Amazon it provides a path to the AMI manifest file in Amazon S3. In the Oracle Cloud this is not supported. An AMI manifest is somewhat comparable to what Oracle calls an orchestration, however not fully the same. As it is not supported by Oracle it is wise not to base any scripting upon it.

You will be able to access the ami-manifest-path by executing the following curl command:
curl http://192.0.0.192/1.0/meta-data/ami-manifest-path

The ancestor-ami-id is by default not supported by the Oracle Cloud and comes from the Amazon implementation of the API. Within Amazon it lists the AMI IDs of any instances that were rebundled to create the AMI; the value only exists if the AMI manifest file contained an ancestor-amis key. As it is not supported by Oracle it is wise not to base any scripting upon it.

You will be able to access the ancestor-ami-id by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/ancestor-ami-id

Within Amazon this provides the mapping to the block devices used by the instance. Within Oracle this is currently not supported, even though it is available as an object in the REST API. As it is not supported by Oracle it is wise not to base any scripting upon it.

You will be able to access the block-device-mapping by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/block-device-mapping

The instance ID is the unique ID for your instance running in the Oracle Cloud. An example of an instance-ID is: /Compute-acme/

You will be able to access the instance-id by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/instance-id

If you started more than one instance at the same time, this value indicates the order in which the instances were launched; the value of the first instance launched is 0. This is a feature from Amazon and is currently not officially supported within the Oracle Cloud, however it will provide you a value back which can be used.

You will be able to access the ami-launch-index by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/ami-launch-index
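Although not officially supported, the launch index can still be handy; for example, to derive a cluster role from it, with index 0 becoming the master, much like the master/slave scenario described for user-data earlier. The function below is a hypothetical sketch; the index value would in practice come from the curl call above:

```shell
# Hypothetical: map an ami-launch-index value to a cluster role.
# Index 0 (the first instance launched) becomes the master, all
# later indexes become slaves.
role_for_launch_index() {
  case "$1" in
    0) echo "master" ;;
    *) echo "slave" ;;
  esac
}

role_for_launch_index 0   # prints "master"
role_for_launch_index 2   # prints "slave"
```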

The instance-type will provide you information about the sizing of the machine: the memory and CPU resources available to the instance.

You will be able to access the instance-type by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/instance-type

The kernel-id provides the ID of the kernel launched with this instance, if applicable. It is officially not supported by Oracle and is part of the Amazon compatibility. As it is not supported by Oracle it is wise not to base any scripting upon it.

You will be able to access the kernel-id by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/kernel-id

The local-hostname will provide the DNS name of the instance as it is used within the Oracle cloud.

You will be able to access the local-hostname by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/local-hostname

The local-ipv4 will provide you the private IP address of the instance as it is used internally within the Oracle cloud.

You will be able to access the local-ipv4 by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/local-ipv4
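Because scripts often feed this value straight into configuration files (as in the Elasticsearch listening-address example mentioned earlier), it can be worth validating that the reply actually looks like an IPv4 address before using it. A rough sketch follows; the function name is made up, and the regex only checks the dotted-quad shape, not the 0-255 range:

```shell
# Rough check: does the retrieved value look like a dotted-quad IPv4
# address? Protects against writing an error reply into a config file.
looks_like_ipv4() {
  echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

if looks_like_ipv4 "10.0.0.5"; then
  echo "10.0.0.5 is usable"
fi
looks_like_ipv4 "ERROR" || echo "ERROR is not an IPv4 address"
```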

Placement will provide you the Availability Zone in which the instance launched. It is officially not supported by Oracle and is part of the Amazon compatibility. As it is not supported by Oracle it is wise not to base any scripting upon it.

You will be able to access the placement by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/placement

Product-codes provides you with the product codes associated with the instance, if any. It is officially not supported by Oracle and is part of the Amazon compatibility. As it is not supported by Oracle it is wise not to base any scripting upon it.

You will be able to access the product-codes by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/product-codes

The public-hostname will provide the DNS name of the instance as it is used to face outwards of the Oracle Cloud.

You will be able to access the public-hostname by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/public-hostname

The public-ipv4 will provide you the public IP address of the instance as it is used to face outwards of the Oracle Cloud.

You will be able to access the public-ipv4 by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/public-ipv4

The public-keys component provides the SSH public keys specified while creating the instance at public-keys/{index}/openssh-key, where {index} is a number starting with 0.

You will be able to access the public-keys by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/public-keys/{index}/openssh-key

The ramdisk-id is the ID of the RAM disk specified at launch time, if applicable. It is officially not supported by Oracle and is part of the Amazon compatibility. As it is not supported by Oracle it is wise not to base any scripting upon it.

You will be able to access the ramdisk-id by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/ramdisk-id

The reservation-id is the ID of the reservation. It is available from the Oracle REST API, however officially not supported by Oracle and part of the Amazon compatibility. As it is not supported by Oracle it is wise not to base any scripting upon it.

You will be able to access the reservation-id by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/reservation-id

Security-groups lists the names of the security groups applied to the instance. This is an Amazon feature and not available in the Oracle Cloud. Within Amazon, after launch you can only change the security groups of instances running in a VPC; such changes are reflected here and in network/interfaces/macs/mac/security-groups.

You will be able to access the security-groups by executing the following curl command:

curl http://192.0.0.192/1.0/meta-data/security-groups

Sunday, September 18, 2016

Oracle Cloud master orchestrations

When deploying a new instance in the Oracle Compute Cloud Service you will notice this is driven by an orchestration. If you look at the orchestration tab you will notice it is not a single orchestration; it consists of three orchestrations: one to bundle them, and two orchestrations that each create a tangible object. In our case the tangible objects are an instance and a storage object.

  • Master orchestration which binds the two others together.
  • Instance orchestration which will create the actual compute instance.
  • Storage orchestration which will create the required storage volume. 
When working with orchestrations you have to remember that an orchestration is only the instruction set on how to create the actual end result. This means that when you stop an orchestration the end result is not only stopped, it is also removed. When you start it again, the end result will be created again (from scratch).

In the below screenshot you can see 3 orchestrations which were used to create a compute instance named TESTBOX08 and the associated storage which was needed.

If we open the details of the master orchestration we can see that this is actually a JSON file containing instructions on what to create. In essence the master is used to bundle both the instance orchestration and the storage orchestration together and make them a single set of instructions.

As you can see in the above JSON used in the TESTBOX08_master orchestration, there is a relationship defined in the master between TESTBOX08_storage and TESTBOX08_instance. In this relationship TESTBOX08_instance is an oplan. An object plan, or oplan, is the primary building block of an orchestration.

As you can see above, and in the below example, this is how a relationship between oplans is defined:

 "oplan": "TESTBOX08_instance",
 "to_oplan": "TESTBOX08_storage",
 "type": "depends",

  • oplan : the label of the oplan for which the dependency is defined (oplan1) 
  • to_oplan : the label of an oplan on which oplan1 depends
  • type : the type of the relationship; it must be depends

For this plan that means that the instance depends on the storage; that also means that the storage will be created first and after that the instance, as soon as you execute the master orchestration.
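Putting the pieces together, a sketch of how such a relationship could sit inside the master orchestration JSON; the attribute names follow the snippet above, but the surrounding structure is an illustration rather than a complete orchestration:

```json
{
  "name": "TESTBOX08_master",
  "relationships": [
    {
      "oplan": "TESTBOX08_instance",
      "to_oplan": "TESTBOX08_storage",
      "type": "depends"
    }
  ]
}
```

With this in place, executing the master will create the storage first and the instance after it.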

Be careful when stopping the master orchestration
When selecting the “stop” command on the TESTBOX08_master orchestration you will get a warning which looks like this:

"Orchestration "TESTBOX08_master" will be stopped. Stopping an orchestration will destroy all objects that were created using the orchestration. If you created instances using this orchestration, those instances will be deleted. If you provisioned storage volumes using this orchestration, those storage volumes will be deleted and all data stored on them will be lost. However, objects created outside this orchestration and merely referenced in this orchestration won't be deleted. At any time, you can re-create objects defined in this orchestration by starting it. Are you sure you want to stop this orchestration?"

As you can see in the above screenshot, when you stop the master orchestration it will take some time before the associated orchestrations are stopped. In the screenshot below you can see how all 3 orchestrations are stopped.

As you can see in the above screenshot, the storage orchestration has stopped as well. Remembering the warning, this means that the attached storage is stopped, which means it is removed. If you check the storage tab you will see that the storage volume is indeed no longer available.

Now, if I select “start” again on the master orchestration it will start executing the storage and the instance orchestration again. It will first start the storage as this is a prerequisite for the instance. The risk with this way of working is that you have to be aware that your storage has really been removed and is created again from scratch.

Meaning, you will have a fresh environment again; everything you have ever done to the system is lost. This might very well be something you want to do… however, if your goal was to stop the instance for some period of time, start it again at a later moment and continue working on it, this is not the right direction. In that case, when you would like to “pause” your instance, you have to stop the instance orchestration only, which is described in more detail in this blogpost.