Sunday, 15 October 2017

FDMEE - building Essbase dimensions - Part 1

As you are probably aware FDMEE is great at processing and loading data but not so good when it comes to metadata; currently the only way to load metadata without customisation is with a supported ERP source system, and even then the functionality is pretty limited.

In the past I wrote about a way to handle Planning metadata through FDMEE using a custom jython script, so I thought it was time to turn to Essbase and look at a potential solution to building dimensions.

In the last post I demonstrated how easy it is in FDMEE to interact with the Essbase Java API using jython, continuing that theme the method I will go through in this post will also use the Java API.

I am going to take a different approach than I did with loading Planning metadata where it was all controlled by a custom script, this time I am going to create a custom application which will allow the metadata to be loaded into FDMEE before loading to an Essbase database.

In summary, the process flow will be to load a text file containing the Essbase dimension information to FDMEE, map the metadata, export to a text file and then build the dimension using an Essbase load rule.

As usual I am going to try and keep it as simple as possible, the aim here is not to provide a complex step by step guide but to plant ideas and then the rest is up to you.

So let us get on with it, I have put together a source comma separated file which is in parent/child format, the idea is to load it to FDMEE, map, export and then perform a dimension build to the existing product dimension in everybody’s favourite Sample Basic database.


In FDMEE a new custom target application is created.


New dimensions are added to the custom application to match the source file. I understand that in this scenario they are not really dimension names but dimension build properties; usually you would be loading data by dimension to a target application, but as this is a custom application and solution the concept of a dimension can be ignored and thought of more as a property.


The source file is in the format of parent member, child member, alias, data storage and consolidation operator, so the dimensions are added to reflect this. If there were additional columns in the source file they could easily be added into the custom application; even if there are properties required for the dimension build that are not in the source file, they could be generated in FDMEE.

One of the properties needs to be assigned a target class of Account for the solution to work; it is not important which one, and the remaining ones can be set as Generic.

It is important to note that when working with a custom application the order of the data that is written to the output file will be defined by the order of “Data Table Column Name”, the order is defined as Account, Entity, UD1, UD2 to UD20, AMOUNT.

So in my example the output file will be in the order of ACCOUNT, UD1, UD2, UD3, UD4 which maps to Parent, Child, Alias, DataStorage, Consolidation.
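To make the column ordering concrete, here is a small sketch of how one line of the exported file maps back to the dimension build properties. The column names (ACCOUNT, UD1 to UD4) are as described above; the sample data values are purely illustrative, and the trailing fixed amount of 1 is simply ignored.

```python
# Map the fixed FDMEE custom application column order to the
# dimension build properties they represent in this example.
COLUMN_ORDER = ["ACCOUNT", "UD1", "UD2", "UD3", "UD4"]
PROPERTY_MAP = {
    "ACCOUNT": "Parent",
    "UD1": "Child",
    "UD2": "Alias",
    "UD3": "DataStorage",
    "UD4": "Consolidation",
}

def describe_record(line, delimiter=","):
    """Return a dict of build property -> value for one exported line."""
    values = line.rstrip("\n").split(delimiter)
    # zip stops at the shortest sequence, so the trailing AMOUNT column
    # (always 1 in this solution) is dropped automatically
    return {PROPERTY_MAP[col]: val for col, val in zip(COLUMN_ORDER, values)}

record = describe_record("Product,Colas,All Colas,Store,+,1")
print(record["Parent"])   # Product
print(record["Child"])    # Colas
```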

On to the import format, the source is set as a comma delimited file and the target is the custom application that has just been created.


The source columns and column numbers from the file are mapped to the target. You will notice that there is a target amount column which is added by default; I am not interested in this target column and it is not present in the source, but it needs to exist, so I just map the source field to 1 and the value to 1, which will become apparent later.

There is not really anything to say about the location as it just uses the default values with the import format selected.


A new data load rule is created, the import format and source file name are selected and I uploaded the source file to the FDMEE inbox.


In the rule target options the property value has been set to enable the export to a file and the column delimiter will be comma; the export file is required as this will then be used for the dimension build using an Essbase load rule.


In the custom options for the rule I have added some integration options, they basically define the Essbase application, database, dimension and rule name, it will be clearer how they are used later when I go through the jython script.


I have kept the data load mappings extremely simple and in the main they are like for like mappings, though this is where you could get as complex and creative as you like depending on how your source file differs from the target dimension build file.


I did add explicit mappings for the data storage member property as the source file contains a more meaningful name than the property values required for an Essbase dimension build.


The Essbase administrator documentation has a table containing all the property codes and the description.
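As a minimal sketch, the explicit mappings translate descriptions like the ones in my source file into the single-letter property codes from that table. The codes below are the common data storage codes from the administrator guide; the source descriptions are just whatever your file happens to contain, so treat this as illustrative.

```python
# Illustrative source-to-target mappings for the data storage property.
# The single-letter codes are Essbase dimension build property codes
# from the administrator guide table; the descriptive source values
# are examples of what a source file might contain.
DATA_STORAGE_CODES = {
    "Store": "S",           # store data
    "Never Share": "N",     # never allow data sharing
    "Label Only": "O",      # tag as label only
    "Dynamic Calc": "X",    # create as dynamic calc
}

print(DATA_STORAGE_CODES["Never Share"])  # N
```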

At this point I can run a data load to import the source file, map and then export.


From the workbench, you can see the full import to export process has been successful.

The source to target mappings can be viewed and you will also notice there is an amount column which I fixed to a value of 1 back in the import format.

The output data file name will be generated based on <target_application_name>_<process_id>.dat and will be written to <application_root_folder>\outbox directory.


The output file is ready for a dimension build using an Essbase load rule.


I am not going to go through the process of how to build a load rule in the EAS console but here is the completed version.


As the file has a header record this has been set to be skipped in the load rule, the amount column has been ignored in the field properties of the rule.

The rule is named the same as the integration option value which was defined earlier in the FDMEE load rule.

The dimension could now be built using the rule and file but we are going to get FDMEE to do that using a jython script.

If you look in the FDMEE log for the process that was just executed you will see reference to jython event scripts that are called at different stages throughout the process.

For example, after the export file has been created there will be the following in the log

INFO  [AIF]: Executing the following script: <application_root_folder>/data/scripts/event/AftExportToDat.py

The scripts are not there by default so you may get a warning saying the script does not exist, if they don’t exist it is just a matter of creating the script and it will be executed next time the process is run.

Please be aware that if event scripts have been enabled and the script exists it will always be executed, so you need to code it so it only triggers the section of the script you are interested in for this process.

I am going to use the above event script to carry out the dimension build using the Essbase Java API.

Now I am not going to go through every single line of the jython script I have written and will only stick to the important sections; the script does contain comments so hopefully it provides you with enough information.
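Before getting into the Essbase calls, here is a minimal sketch of the guard logic at the top of the event script and how the export file path is derived. The fdmContext key names (TARGETAPPNAME, LOADID, OUTBOXDIR) and the target application name ESSDIMBUILD are my assumptions for illustration, so verify them against your own process log.

```python
import os

# Sketch of the guard at the top of AftExportToDat.py. Event scripts
# fire for every process, so bail out unless this is our custom
# application. Key names below are assumptions - check your process log.
def get_export_file(fdmContext, target_app="ESSDIMBUILD"):
    """Return the exported .dat path, or None if this is another process."""
    if fdmContext["TARGETAPPNAME"] != target_app:
        return None
    # output file is <target_application_name>_<process_id>.dat
    file_name = "%s_%s.dat" % (fdmContext["TARGETAPPNAME"], fdmContext["LOADID"])
    return os.path.join(fdmContext["OUTBOXDIR"], file_name)

ctx = {"TARGETAPPNAME": "ESSDIMBUILD", "LOADID": 1234,
       "OUTBOXDIR": "/u01/FDMEE/outbox"}
print(get_export_file(ctx))
```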

In summary, the Essbase classes that are required to perform a dimension build are imported.

The target application name and process ID are stored in variables.

The values from the integration options in the FDMEE load rule are stored using the API method “getRuleDetails”; these are held in “RULE_ATTRx”.

The target Essbase application and database name are then generated from the retrieved values.

The full path to the exported text file and dimension build error file are generated.


The next section is where the Essbase Java API comes into play; a login to the Essbase server is made using a single sign-on token so no clear text passwords are stored.

A custom function is called which adds some additional logging to the process logs which I will show later; it is not actually necessary to do this.

The dimension build is run using the “buildDimension” method, passing in the stored rule name, load file and error file.

If an error file is generated it is read and the errors are added to the process log.
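The error handling step can be sketched as below. In the real event script the rejections would be written with the FDMEE logging API rather than printed; the exact logging call, the file name and the sample rejection line are all assumptions for illustration.

```python
import os
import tempfile

# Minimal sketch: if a dimension build error file was produced, read
# each rejected record and surface it (here just printed; in the event
# script you would use the FDMEE process logging instead).
def log_build_errors(error_file):
    rejects = []
    if os.path.exists(error_file) and os.path.getsize(error_file) > 0:
        with open(error_file) as f:
            for line in f:
                line = line.strip()
                if line:
                    rejects.append(line)
                    print("Dimension build reject: " + line)
    return rejects

# quick demo with a throwaway file containing one illustrative rejection
demo = os.path.join(tempfile.mkdtemp(), "ESSDIMBUILD_1234.err")
with open(demo, "w") as f:
    f.write("\\Record #3 - Error adding member Cola2 (3317)\n")
print(len(log_build_errors(demo)))  # 1
```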


Now that the jython is in place the export stage of the FDMEE load rule can be run again.


The process details confirm that the export and dimension build were successful, the dimension build file can also be downloaded.


The process steps include the additional custom logging I was referring to earlier.


Opening the outline in the EAS console shows the new members and properties have been successfully created in the product dimension.


Let me demonstrate what happens when dimension build errors occur.


This time I have added an invalid record to the source file which is highlighted above, the full data load process is then executed again.

Instead of a green tick, process details displays a warning icon which was generated using the custom logging function in the jython event script.


The process log contains the full location to the dimension build error file and includes the rejections in the log.


Now we have the option to load data and metadata to a target Essbase database.

You don’t have to use the custom application method; if the source file does not need any kind of mapping and you don’t require visibility of what is being loaded through the dimension build, then it could all be done with a single FDMEE custom script which would be practically along the same lines as the code in the event script.

So what if your source is not a file and is a relational database, what if you want to be able to run incremental builds for multiple dimensions, what if you don’t want to create an export file and instead have an Essbase SQL dimension build? Well look out for part 2 where all of these questions will be answered.

Tuesday, 26 September 2017

FDMEE – Load rules driven by substitution variables

A question has come up a few times around using Essbase substitution variables to define the period for an FDMEE data load rule, so I thought I would put a post together on the subject.

Unfortunately at present whether it is on-premise or the cloud it is not possible to use sub vars in period mappings.

The following type of period mapping would be nice for an Essbase/Planning target application but will generate errors if you try to use it.


Even though it is not currently possible to put the sub vars directly into period mappings there are alternatives and I am going to go through one of the possible methods.

Let us start with EPM Cloud, data management in the cloud does not allow any kind of custom jython scripting yet so the options are limited from the UI perspective but it does have the benefit of the EPM Automate utility and REST API.

Both EPM Automate and REST have commands to run data load rules and retrieve sub vars, so it should be possible to run a rule based on sub var values. I am not going to spend time going into these commands as I have covered both of them in previous blogs and you can read about them in greater detail here and here.

Right, I am going to assume you have read the previous posts or already know all about the commands.

Within the planning application I have the following sub vars that define the current month and year.


The period name defined in data management is a combination of the short name for the month and the last two digits of the year.


With EPM Automate I can use the getsubstvar command to retrieve the sub var values.

The format for getsubstvar is:

epmautomate getsubstvar CUBE_NAME|ALL [name=VARIABLE_NAME]


Then with the rundatarule command, run the load rule with the current period name.

The format for rundatarule:

epmautomate rundatarule RULE_NAME START_PERIOD END_PERIOD IMPORT_MODE EXPORT_MODE [FILE_NAME]

In my example I am going to load for a single period but could easily handle multi period loads.


To achieve the above there is some custom scripting required as the sub vars need to be retrieved, the period name constructed and the load rule called.

I know I say this every time but it is up to you which scripting language you decide to opt for, it all depends which one you feel most comfortable with.

I am going to stick with PowerShell as it is easily accessible, not difficult to pick up, and it is definitely time to put the old Windows command line scripting to bed.

The first part of the script sets all the variables and should be self-explanatory.


The main section of the script does the following:

Logs into EPM cloud using EPM Automate based on the variables that have been defined.

Checks if the login was successful, if not exit.

Retrieves and stores the value for the current month variable using the EPM Automate getsubstvar command.

The month is then extracted as the getsubstvar command returns an array of three lines.


The variable value is in the second line which I then split out based on the equals sign and store the month name.

The same method is then repeated to extract the year from the current year variable.

The period name is then constructed based on the month and year.

Finally the data load rule is executed using the rundatarule command and passing in the parameters from the stored variables.
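The post does this in PowerShell, but the parsing and period construction steps are simple enough to sketch in Python. The sample output lines and the variable and rule names (CurrMonth, CurrYear, DELIMFILE_DL) are illustrative assumptions; the point is only "take the second line, split on the equals sign, then combine month and the last two digits of the year".

```python
# Sketch of the sub var parsing and period name construction.
def extract_value(output_lines):
    """getsubstvar returns three lines; the value is on the second,
    after the equals sign."""
    return output_lines[1].split("=")[1].strip()

# stand-ins for the captured epmautomate getsubstvar output
month_output = ["", "CurrMonth=Oct", ""]
year_output = ["", "CurrYear=FY18", ""]

month = extract_value(month_output)    # Oct
year = extract_value(year_output)      # FY18
# period names are short month name plus last two digits of the year
period = "%s-%s" % (month, year[-2:])
print(period)  # Oct-18

# arguments for: epmautomate rundatarule RULE START END IMPORT_MODE EXPORT_MODE
args = ["DELIMFILE_DL", period, period, "REPLACE", "STORE_DATA"]
print(" ".join(args))
```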


So with a simple script, data load rules can be executed based on sub vars.

Moving on to REST, it is possible to achieve the same results without having the EPM Automate utility installed, which also removes the need to keep updating and installing newer versions of the utility.

I have already provided the links to previous posts where I have gone into detail on using the REST API but in summary there is a resource available to return substitution variables, the format is:

https://<cloud_instance>/HyperionPlanning/rest/v3/applications/<appname>/substitutionvariables/<subvarname>

An example using a REST client with a GET method request:


The sub var details are returned in JSON format.

The format for running a data load rule using REST is:

https://<cloud_instance>/aif/rest/V1/jobs

A POST method is required with the data rule parameters in the body of the request in JSON format.
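As a sketch, the body can be built as a dictionary and serialised to JSON. The parameter names follow the Data Management REST API job resource (jobType, jobName, period and mode parameters); the rule name and period values are illustrative.

```python
import json

# Sketch of the JSON body for running a data load rule via REST.
# Values below are illustrative; parameter names follow the
# Data Management REST API jobs resource.
period = "Oct-18"
payload = {
    "jobType": "DATARULE",
    "jobName": "DELIMFILE_DL",   # illustrative rule name
    "startPeriod": period,
    "endPeriod": period,
    "importMode": "REPLACE",
    "exportMode": "STORE_DATA",
    "fileName": "",
}
body = json.dumps(payload)
print(body)
```

This body would then be POSTed to the jobs URL above with a content type of application/json.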


Running the request will return details about the job including the status and ID.


Converting this to a script does not require much more effort than the last example, first of all variables are defined.


The main section of the script follows the same process as the previous script and with the comments I have provided I don’t feel I have to go into any more detail.


If I wanted I could expand the script to keep checking the status of the data load rule until it completes which I have done in other posts.

So that is one possible method covered for EPM cloud using either EPM Automate or REST, what about on-premise?

From FDMEE 11.1.2.4.210 the REST API is available to run data load rules, but what is not yet available is the ability to return sub var information with the REST API; this requires an update to Planning and your guess is as good as mine as to when that will happen.

There are different ways to obtain sub var details such as MaxL or one of the available APIs and this is a solution I have come up with using custom jython scripting in FDMEE.

I will get on to the script shortly but the idea is to have the same parameters available as when running a rule but also include ones for the current month and year sub vars.

I registered a custom script which has parameters for the data load rule name and the import and export modes, which have SQL queries defined to return the possible values.


There are two parameters for sub vars which will define which variables to hold the current month and year.

In my example the target is going to be a planning application but the concept would be exactly the same for an Essbase application.

Once registered the script is available for execution.


When the script is run, the parameters are displayed and values can be entered.


Selecting either the Import or Export mode will run a SQL query behind the scenes and generate a list of possible values.


The jython script is then called and the parameters are passed into the script.

The script itself uses a combination of the Essbase Java API to retrieve the sub var values and then the REST API to run the data load rule.

There is no additional configuration required to run the script, such as adding jar files, as the Essbase API is directly accessible from FDMEE; the same goes for calling the REST API and managing JSON. Also there are no hardcoded passwords as authentication is handled by an SSO token.

I am not going to show every line of the script but in summary the required Java classes are imported, then the variables are defined using a combination of static and ones passed in from running the script.


With the use of Essbase Java API the current month and year sub var values are retrieved and from these the period name is constructed.

JSON is generated for the body of the REST request and then the FDMEE REST resource is called to run the data load rule.


In my example I have kept the code as simple as possible, the full version handles exceptions such as if there were problems calling the REST resource.

Once the custom script has been run there are two process log entries, one for the custom script execution and one for the running of the data load rule.


The process log provides further details on running the rule using REST.


The custom script can be run from the FDMEE UI or it can be run remotely as it is possible to run custom scripts using the REST API.

Here is an example using a REST client to run a FDMEE custom script:


I also put together a PowerShell script that takes user input to define parameters and then calls the REST resource to run the custom jython script.


So now we have the ability to run an FDMEE data load rule where the period name is based on Essbase substitution variables without having to log into FDMEE through workspace.

Well that wraps another post, I hope you found it useful.

Saturday, 2 September 2017

EPM Cloud - Connections are here

In the latest EPM Cloud release there is some new functionality that I thought would be worth covering, it was originally titled EPM connect in the presentations I have previously seen and allows seamless connectivity between different EPM cloud services.

Before the latest release if you had multiple EPM Cloud services such as PBCS, EPBCS, FCCS, each would act independently; to access the UI you would have to use a separate URL for each application and there was no link between them. In the on-premise world you can access all your applications through a single point, which is workspace.

With this EPM Cloud release it is now possible to connect to multiple services from a source application and with the use of navigation flows make it all feel unified by embedding clusters or cards from different services into the source application.

To be able to create a connection the following sources are currently supported:
  • Oracle Planning and Budgeting Cloud
  • Oracle Enterprise Planning and Budgeting Cloud
  • Oracle Financial Consolidation and Close Cloud
  • Oracle Tax Reporting Cloud
From this source, you are then able to make a connection to any of the following EPM Cloud services.
  • Oracle Planning and Budgeting Cloud
  • Oracle Enterprise Planning and Budgeting Cloud
  • Oracle Financial Consolidation and Close Cloud
  • Oracle Tax Reporting Cloud
  • Oracle Account Reconciliation Cloud
  • Oracle Profitability and Cost Management Cloud
  • Oracle Enterprise Performance Reporting Cloud
The matrix from the documentation provides a clearer picture:


If the services are in the same domain, then the connection can be configured using a service administrator account.

From information in the announcements and new features update document, it also looks possible to connect to services that are in different domains and in different data centers; this is achieved by configuring the domains to use single sign-on through the same identity provider.

With this release it is not possible to configure connections in different domains within the same data center.

I am not sure yet why it is currently only supported across different data centers and not in the same data center but no doubt there is a technical reason behind it (I have now been informed it is due to current restrictions in shared identity management). I was hoping to test out setting up connections between different domains but unfortunately the domains I have access to are all in the same data center, so I will have to wait until it is supported.

This slightly conflicts with what is described in the administrator documentation:

“If the target and source subscription instances are not on the same identity domain, then you might not be able to establish a connection between them. You must relocate one of the instances so that the source and target instances share an identity domain.”

I believe the above statement about relocating instances is going to be removed from the documentation as it should only be done in exceptional circumstances, once it has been removed I will update this post.

Anyway, I will go through setting up connections as hopefully it will become clearer what this new functionality is all about.

To create a new connection to a cloud service there will be a new card called “Connections” which is part of the tools cluster.


Alternatively connections can be accessed from the navigator.


If you don’t see “Connections” it may be because you are using a custom navigation flow, it should be available in the default flow and can be added to any custom flows.

Once you go to the Connections card you can manage connections so let us add a new connection.


You will then be presented with the list of EPM cloud services that you can make a connection to.


In this example I am going to make a connection from a source PBCS application to a target EPBCS application, but it could just as easily be any of the EPM cloud services; the functionality will operate in the same way.


Once the connection has been validated the target application name will be populated, you may be wondering what happens when the service administrator password is changed, well I will get on to that later in the post.

Please note the instance you are connecting to will have to be upgraded to the latest release in order for the connection to be successful.

I did test what would happen if I tried to create a connection to an EPM cloud service which is in a different domain but in the same data center, and received an error about the connection having to belong to the same domain.


Once the connection has been saved it will be added under manage connections where it can be edited or deleted.


If you click the home icon and then go to the navigator you will see the new connection.


To navigate to the target application, you just have to click the connection name and the application will open in the same browser tab, alternatively click the open window icon and the application will open in a new browser window. So a user can now access multiple cloud services through a single access point with one login.

You can build up connections to all the EPM cloud services you have access to.


The above is a true reflection of what the connections will look like, unlike what is currently in the documentation, where connections have been created to a PBCS instance but named as if they are to different cloud services.

I am going to stick with a single connection to EPBCS.

To add a card/cluster from the target application to your source application, go to navigation flows, select the navigation flow you want to update.


Select “Add Existing Card/Cluster”, this will open the usual library window but now on the left side you will have the option to select any of your connections.

From the library I selected the EPBCS connection and then the Revenue card.


It is worth mentioning that:

“For this release, cards cannot be directly selected from Oracle Account Reconciliation Cloud, Oracle Enterprise Performance Reporting Cloud, and Oracle Profitability and Cost Management Cloud using the Add Existing Card/Cluster option.”

The Revenue card is added to the navigation flow with the connection name “EPBCS” and navigation flow name “Default”.


Once the navigation flow has been saved and is active, select the home icon and then “Reload Navigation Flow”.


The Revenue card from the target EPBCS application is now included in the source application’s main screen, or as Oracle likes to call it, the springboard.


Selecting the Revenue card will seamlessly connect to the target application and display as if it was part of the source application.


I could have easily picked to add in an existing cluster from the target EPBCS application.


Once the flow has been saved and reloaded the cluster is displayed from the target application.


It is possible to mix it up and create a cluster which has cards from the local application or any of the EPM cloud services, Oracle like to term this as a mash-up but that saying means something totally different to me.

So let’s add a new cluster.


Now create a new card against the local application and newly created cluster, assign a form to the card.


Next, create another card against the same cluster but this time assign a dashboard from the EPBCS application; the same type of artifacts can be selected whether it is a local or remote connection.


The navigation flow now has a new cluster with a card pointing to a form in the local PBCS application and a card pointing to a dashboard in the remote EPBCS application.

Save the flow, activate and reload and the cluster and cards are displayed in the springboard.


Once again it is possible to seamlessly flip between viewing a form in the local application


and a dashboard in the EPBCS application.


As you would expect the clusters/cards also show up in the navigator


We can take it a step further and create a card with tabs and mix it up between local and target cloud services. In the following example I have added vertical tabs to a new card, with one tab from the local PBCS application and one from the remote EPBCS application; the card is assigned to the existing cluster.


You know the drill, save the flow, activate, reload and the new card is available in the springboard and from the navigator.


Opening the card will now allow the viewing of vertical tabs which are looking at forms, dashboards and reports across the local and remote application.


It is not limited to vertical tabs and we can mix it up in horizontal tabs; for example, in the above “Mash it up” card I have added a new tab, which has been set to tabular, automatically setting the orientation to horizontal.


Next I added two new sub tabs, one looking at a form in the local PBCS application and another at a form in the EPBCS application.


This was then saved and you can see it has been added to the card I created earlier.


Now I have a card that consists of vertical and horizontal tabs which work across cloud services, this is great for the end user who does not need to be concerned about logging into multiple applications and this is where you can appreciate the power of navigation flows across multiple services.


Remember in these examples I am just looking at PBCS and EPBCS, you can build flows across any of the EPM Cloud services which you have access to.

For example I can add a connection to EPRCS and select from Reports, Snapshots or Packages.


Just like with all the previous examples the card/clusters/tabs will act in the same way.


You may be asking what about security, do the navigation flows honour access permissions across the cloud services, well the answer is yes and really it has to for the functionality to be of any use.

From my initial testing a user will be able to see all the defined connections in the navigator even if they don't have access to some of the services; if the user does not have access and tries to open one of the connections, they will receive an error.

Up to now I have been using the service administrator account to demo the flows, so let us take a quick look at the difference when a standard user accesses them.

Back to the Revenue card which is part of the EPBCS connection, the admin user can access multiple horizontal and vertical tabs.


Now for a standard user who has only been given access to the Revenue dashboard, which is part of the first vertical tab; the user does not have access to any of the forms or dashboards that are part of the other tabs.


When the user accesses the Revenue card they can only see the revenue dashboard, so the access permissions are being honoured.

One final example with the “Connect Demo” cluster, the admin has access to all the cards that are part of the cluster.


The user does not have access to the form that is part of the “From EPBCS” card so when they view the cluster they don’t see the card.


The same goes for the “Mash it up” card with the mix of vertical and horizontal tabs across cloud services; the admin has full access.


The user will only see what they have been given access to in the local and target applications.


This applies to all the access permissions for the user, so say they access a form across a service, they will only see the members they have been granted access to in the target application.

Right, going back to what I mentioned earlier when setting up a connection, what happens when the service administrator password is changed.

Well, initially everything looks ok, but that must be because of caching; if the service is restarted or after the maintenance window, the connections will be greyed out in the navigator and inaccessible, and clusters/cards/tabs from the target applications will not be visible any more in the source application until the connection has been updated with valid credentials.


I can see that being a bit of a pain because the administrator will have to remember to update all the connections each time their password is changed; it is easy to forget and yet another thing to add to the list, like having to update the password in EPM Automate or API scripts.

It would be nice if there was some kind of sync option, and it would be great if there was a command in EPM Automate or the API to manage the service administrator password, as well as the ability to find out how many days are left until the password expires, which would help with scripting. I am not sure how feasible that is though due to security concerns. Oracle, if you are watching, let me know your views on this :)

Finally on to migrations, if you take a look at the artifact list you will see there is new addition under Global Artifacts called “Service Connections”, this will contain each of the connections that have been configured.


If you export the snapshot there will be an XML file for each connection.


The XML is quite simple but what you need to be aware of is that it does not contain a password, so if you are migrating or restoring then after the import you will need to edit each connection and enter the password.


Well I definitely think that is enough for this post, I am pretty impressed with the new functionality and it is definitely a step in the right direction, until next time..