Monday, 12 March 2018

Planning audit reports with help from Groovy - Part 2

In the previous post I went through a possible solution for producing audit reports through the planning UI with help from business rules and Groovy. I mainly covered the background to planning audit reports and went through a demo of the solution; in this post I am going to break down how it was all put together.

Let us start with running the audit report business rule which can be accessed through the standard or simplified user interface. I am going to stick again with the simplified interface as it provides the ability to download the audit report once it has been generated.

Once launched this will present the user with runtime prompts to define how the audit report will be generated.

The audit type and date range are built from Smart Lists within the planning application.

As I explained in the previous post I have limited the list of options for demo purposes but the final solution has all the different audit types available.

The above list is built from the following Smart List.

In Calculation Manager a variable has been created as a runtime prompt with the type set to integer and the planning Smart List selected.

This means that when the rule is launched the audit type drop-down will be displayed as shown above; the selection passed into the business rule is the integer value that matches the Smart List ID.

It is the same concept for the date range, which is also driven by a planning Smart List.

The IDs have been created this way because of how the SQL will be generated in the Groovy script: apart from the “All” option, each ID matches the number of days in the range.

Just like with the audit type, a variable has been created to display the Smart List when launching the rule.

The option to select a delimiter for the audit report is handled using a Calculation Manager runtime prompt string variable with a default value set.

There is also a hidden variable which passes the name of the planning application into the business rule.

On to the business rule, which basically just calls a Groovy script and passes in the variable names and the values from the runtime prompts.

If you want more detail about getting up and running with Groovy then please refer to one of my previous posts which you can read about here or here.

Before I get on to the Groovy script there are a few points to mention.

I have disabled static compile and sandbox in order to relax the static type checking and allow access to Java packages; no additional Java files are required other than the Groovy jar.

The SQL in the script is based on Oracle but wouldn’t require much updating for it to run against SQL Server.

The script does not contain any hardcoded database connection information, including passwords, as these are generated at runtime.

I will break the script down into chunks; the variables would all have been defined at the beginning of the script but I have moved them around so they make more sense in context.

The beginning section of the script generates the connection information for the planning application database; a method is called to return the planning system database connection details from the EPM Registry.

A connection is made to the planning system database and a SQL query is executed to return the planning application database connection information. The planning application name used in the SQL query was passed in from the business rule, having been defined in a Calculation Manager variable.

Once the planning application connection details have been returned a connection is then made to it.

If you are not interested in dynamically generating the database connection details then the above can be simply replaced with something like:

import groovy.sql.Sql
sql = Sql.newInstance("jdbc:oracle:thin:@dbserver:port/servicename", "username", "password", "oracle.jdbc.OracleDriver")

The next section manages the variables which, as I said, would usually be at the start of the script.

The values selected from the drop downs for the list of audit types and date ranges are passed into the script as strings so they are converted to integers.

A date is generated that is then used to form the names for the exported audit text file and zip.

A temporary directory is defined which will be the location where the audit file will be generated before it is compressed and then moved to the planning inbox/outbox explorer location.

The Groovy script is executed by the Essbase process so the temporary directory is located on the Essbase server.

Next a lookup is made to the EPM Registry to retrieve the planning inbox/outbox explorer location which, if you are not aware, is located within the LCM import/export directory.

If I ran the audit business rule with the following runtime prompt selections:

The variables in the Groovy script would be assigned with the following:
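As a rough illustration (the variable names, paths and values below are hypothetical and the real script’s naming will differ), the assignments might look something like this in Groovy:

// values passed in from the business rule runtime prompts, shown here as literals
String rtpAuditType = "2"         // Smart List ID, e.g. 2 = "Data"
String rtpDateRange = "30"        // Smart List ID, matches the number of days
String delimiter = ","            // delimiter runtime prompt
String appName = "DEMOAPP"        // hidden variable holding the planning application name

// the drop-down selections arrive as strings so convert them to integers
int auditType = rtpAuditType.toInteger()
int dateRange = rtpDateRange.toInteger()

// date stamp used to build the names of the audit text file and zip
String dateStamp = new Date().format('yyyyMMdd_HHmmss')
String auditFile = "${appName}_audit_${dateStamp}.txt"
String zipFile = "${appName}_audit_${dateStamp}.zip"

// temporary working directory on the Essbase server (the script runs in the Essbase process)
String tmpFileDir = "E:/Temp"
// in the real script this is looked up from the EPM Registry rather than hardcoded
String lcmDir = "E:/EPMData/lcm/${appName}"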


The next section builds a SQL statement to retrieve the records from the planning audit table based on the values selected from the business rule runtime prompts.

Two SQL statements are built and they are practically the same, except that one of them returns a count of the number of audit records based on the selected criteria; this is because we don’t want to generate an audit file if there are no records returned.

Based on the same selection from the previous example the SQL generated for the count would be:

SELECT count(*) as numrecords FROM hsp_audit_records WHERE 1=1 AND type = 'Data' AND TRUNC(time_posted) >= TRUNC(sysdate) -30

The query is returning a count of the number of audit records where the audit type is data for the last 30 days.

The SQL is then executed, and if the number of records returned equals zero the business rule will terminate with an error; the message “No records returned” will be available in the job console.
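To make that logic concrete, here is a rough sketch of how the count check might be written, continuing the variables from the earlier snippet and assuming the groovy.sql.Sql connection (sql) shown above; the Smart List ID checks are assumptions based on my example lists and the real script may differ:

// build the WHERE clause from the runtime prompt selections
String whereClause = "WHERE 1=1"
if (auditType == 2)          // assumption: 2 = "Data" in the audit type Smart List
    whereClause += " AND type = 'Data'"
if (dateRange > 0)           // 0 = "All", otherwise the ID matches the number of days
    whereClause += " AND TRUNC(time_posted) >= TRUNC(sysdate) -" + dateRange

// count the matching audit records first - no point producing a file if there are none
def countRow = sql.firstRow("SELECT count(*) as numrecords FROM hsp_audit_records " + whereClause)
if (countRow.numrecords == 0) {
    // throwing an exception fails the rule and the message is visible in the job console
    throw new Exception("No records returned")
}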

The SQL to return the audit records is then generated based on the same criteria and would produce:

SELECT * FROM hsp_audit_records WHERE 1=1 AND type = 'Data' AND TRUNC(time_posted) >= TRUNC(sysdate) -30 order by time_posted desc

If the above SQL query is run using a tool like SQL Developer it would produce the following results:

The Groovy script executes the SQL, and the column names and rows returned are written to a text file, separated by the chosen delimiter.

The file is temporarily created in the directory defined in the “tmpFileDir” variable.

The contents of the file will be similar to the results shown in the earlier query.
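Continuing the sketch (reusing sql, whereClause, delimiter, tmpFileDir and auditFile from the earlier snippets), the export to a delimited text file might look something like this; the real script may structure it differently:

// full query to return the audit records, newest first
String auditQuery = "SELECT * FROM hsp_audit_records " + whereClause + " order by time_posted desc"

File exportFile = new File(tmpFileDir, auditFile)
exportFile.withWriter { writer ->
    boolean headerWritten = false
    sql.eachRow(auditQuery) { row ->
        def meta = row.getMetaData()
        int cols = meta.getColumnCount()
        if (!headerWritten) {
            // write the column names as the header record
            writer.writeLine((1..cols).collect { meta.getColumnName(it) }.join(delimiter))
            headerWritten = true
        }
        // write each audit row with the columns separated by the chosen delimiter
        writer.writeLine((1..cols).collect { row[it - 1]?.toString() ?: "" }.join(delimiter))
    }
}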

The audit text file is then compressed into a zip file.

The zip file is created in the same temporary directory.

Finally, the text file is deleted.

The zip file is moved to the planning inbox/outbox location.
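A sketch of that compress, clean up and move sequence, again reusing tmpFileDir, auditFile, zipFile and lcmDir from the earlier snippets (the real script may well handle the files differently):

import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

File textFile = new File(tmpFileDir, auditFile)
File zipArchive = new File(tmpFileDir, zipFile)

// compress the audit text file into a zip in the same temporary directory
new ZipOutputStream(new FileOutputStream(zipArchive)).withStream { zos ->
    zos.putNextEntry(new ZipEntry(textFile.name))
    textFile.withInputStream { zos << it }
    zos.closeEntry()
}

// remove the uncompressed text file
textFile.delete()

// move the zip to the planning inbox/outbox (LCM import/export) location
File target = new File(lcmDir, zipArchive.name)
if (target.exists()) target.delete()
zipArchive.renameTo(target)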

The file is then accessible from the planning simplified interface where it can be downloaded or deleted.

As shown in the last post, the solution also has the ability to archive the records in the audit table and includes a planning form showing the number of records in the audit table and when it was last archived. If you would like more information then feel free to get in touch.

Monday, 26 February 2018

Planning audit reports with help from Groovy - Part 1

A common requirement in planning is to be able to produce an audit report. You would think this would be easy, as the options are available in the user interface to define which actions to audit.

The problem is that for on-premise planning, once the options have been enabled there is currently no ability through the user interface to produce a report.

The documentation states:

“View results in the HSP_AUDIT_RECORDS table using a RDBMS report writer.”

So not very helpful, and a planning administrator may not have access to the planning application’s database table “HSP_AUDIT_RECORDS”; this can mean involving a DBA to gain access to the table or to have an extract generated, which is not always an easy task.

It is only recently that the ability to view and download audit data was made available in EPM Cloud, so don’t expect this for on-premise any time soon.

Let us first take a quick look at what is available in the cloud. As with on-premise, auditing can be enabled by selecting one or more of the audit types; once enabled, audit data will be displayed in the UI.

There is the ability to filter the different audit types.

The date range can be filtered from a selection of predefined ranges.

The data can then either be viewed or exported and opened in Excel.

Until this functionality is available for on-premise we must look at alternative solutions. As the audit data is stored in one database table I thought maybe using Groovy could come to the rescue again. In a previous post I covered how easy using SQL with Groovy can be, which you can read about here.

I am going to demo the solution first and then go into more detail on how it was put together. To be able to use Groovy functionality you need to be on a recent Calculation Manager patch; I recommend .008+.

I will be using the simplified interface as that is the closest to the cloud, even though it is still a long way off. It also provides an inbox/outbox explorer, which is important for this solution.

I have a business rule which will call a Groovy script to handle the audit data, three runtime prompts are displayed after launching the rule.

The dropdown for audit data type allows the selection of all audit data, all audit data excluding data, or data only.

The final solution allows the selection of all the different audit types, similar to the cloud, but for simplicity the example I am going to go through will be based on the above. If you are interested in the final solution then please feel free to get in touch.

Just like with the cloud, a date range can be selected for the audit data.

It doesn’t have to be limited to a predefined range, it could just as easily have the option to select a start and end date.

There is also the option to set a delimiter for the audit data. After making the selections, the rule is ready to be launched.

If audit data records are returned for the selected values the rule should run successfully.

After running the rule, you can go to the console.

Then under “Actions” the “Inbox/Outbox Explorer” can be accessed.

There will be a timestamped archive file available.

This can then either be downloaded or deleted.

Once downloaded, the zip file can be opened which will contain a text file containing the audit data.

The audit text file can be extracted and viewed in a text editor; the data is ordered by date. I could have changed the column headings, but for this example I have stuck with the names of the columns in the database audit table.

Alternatively, it can be opened and viewed in Excel.

The functionality may not be exactly the same as the cloud but in terms of the ability to filter the audit records and then download, it comes pretty close.

The solution includes the ability to archive the audit table, which basically means the records in the audit database table are transferred to another table holding the history and then the audit table is cleared down.

The rule has a runtime prompt included to stop it from being accidentally run.
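The archive itself isn’t broken down in these posts, but conceptually it could be as simple as copying the rows into a history table and then clearing the audit table. A minimal Groovy sketch, with placeholder connection details and a hypothetical history table name:

import groovy.sql.Sql

// placeholder connection details - the real solution builds these from the EPM Registry
def sql = Sql.newInstance("jdbc:oracle:thin:@dbserver:1521/servicename", "username", "password", "oracle.jdbc.OracleDriver")

// copy the current audit records into a history table (hypothetical name), then clear down the audit table
sql.execute("INSERT INTO hsp_audit_records_history SELECT * FROM hsp_audit_records")
sql.execute("DELETE FROM hsp_audit_records")
sql.close()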

Finally, there is a form which runs a Groovy-based rule on load to display the number of records in the audit table, and the form also displays the last time the audit table was archived. The rules to either archive the data or produce the audit file are attached to the form as well.

So how was this all put together? Well you are going to have to wait until the next part where I will cover it in more detail.

Monday, 12 February 2018

EPM Cloud - Capturing rejections in Data Management - Part 2

Moving on to the second part, where I am looking at a possible solution for capturing and handling rejections in EPM Cloud Data Management.

Just a quick recap of the first part: data was loaded through Data Management but records were rejected due to missing entity members in the target application. With help from scripting and the REST API I covered a method to run the data load rule, check the status of the rule and, if there was a failure, download the process log; the log was then parsed for data load rejections and, if any were found, they were sent in an email.

In this post I am going to extend the process for capturing rejections and handle them by using the metadata functionality that is now available in Data Management.

As in the previous part, I will be focusing on the entity member rejections, but there is nothing stopping the process from being expanded to handle other dimensions. That said, if there are rejections across multiple dimensions then it might be best to concentrate on starting with a cleaner data source file.

I am going to use the same source file, where the rows containing entity members “901” and “902” will be rejected due to the members not existing in the target application.

The aim will be to extract the rejected entity members from the process log and write them to a text file which can be uploaded to Data Management.

Using a metadata load rule, the members will be loaded based on mapping logic, so in this example they will be loaded as children of “900”.

Even though I am not yet a fan of the simplified dimension editor I thought I had better start using it in my examples as the classic editor is no longer going to be supported.

Once the members have been loaded and a refresh performed, the data load can be run again and there should be no rejections.

Before we can get to that stage a metadata load needs to be set up in Data Management. I am not going to go into great detail about the metadata functionality as I have already covered this topic in a previous blog, which you can read all about here.

Under the Target Application Summary dimensions are added.

For the entity dimension I have only enabled the properties I will be using in the metadata load; these are parent, member and data storage. I could have got away with not including data storage and letting the system pick the default value, but I just wanted to show that properties not contained in the source can be included.

The import format is simple as the source file will only have one column containing the new entity members.

The File Type should be set as “Delimited – All Data Type” and the Data column is ignored for metadata loads.

A new location was added and a new data load rule created against the new location.

I assigned it to a category named “Metadata” which I use for metadata type loads.

I did not set a file as I am going to include that in the automation script; the directory was defined as the location folder that the script will upload the rejected member file to.

In the target options of the rule the “Refresh Database” property value was set to “Yes” as I want the members to exist in Essbase when the data is reloaded.

On to the mappings: an explicit mapping is defined for “Data Storage” to map “N” from the import format to “never share”.

For the parent I used a “#FORMAT” mapping type which takes the first character of the member and suffixes “00”; if you look back, I also mapped the entity member to the parent, so as an example member “901” will be mapped to the parent “900”.

The entity member property was defined as a like-for-like mapping because I want the source member to be the same as the target.

If I wanted to expand the automation process further I could add in a step to upload explicit mappings for the new entity members.

Now the rule is in place it is time to go back to my original PowerShell script from the last blog post and modify it to handle the metadata load.

I am going to continue with the script from the point where the process log has been downloaded, meaning the data load has taken place and failed.

In the last part I parsed the process log and added any rejections to an email; this time I am going to parse the log and create a file containing the rejected members.

In the process log the rejected rows of data will contain “Error: 3303”.

I will break the script into bite-sized sections and include the variables, which would usually be at the start of the script.

The variables above have comments so I shouldn’t need to explain them, but I have cut down on the number of variables for demo purposes in this post; the final script uses variables where possible and functions to avoid repetition.

On to the next section of the script, which checks if there is an existing entity rejection file and, if there is, deletes it.

Each line of the process log is then cycled through and, if a line contains “Error: 3303”, it is parsed to extract the entity member. The script could be enhanced to handle multiple dimensions but I am trying to keep it simple for this example.

To break down the parsing section further let me take the following line as an example:

First the line is split by a pipe delimiter and stored in an array; for the above, the array would look like:

The second entry in the array contains the member that was rejected and the third entry contains the data record.

The data record is then split by a comma delimiter and stored in an array which looks like:

A test is then made to confirm that the rejected member is part of the entity dimension, as the entity has been defined as the second entry in the data record; if they match, the member is appended to the entity rejection file.
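The automation itself is a PowerShell script, but purely to illustrate the parsing logic described above, here is a rough equivalent in Groovy; the file paths are hypothetical and the pipe-delimited layout of the “Error: 3303” lines is as described above:

// illustration only - the actual automation is written in PowerShell
File processLog = new File("C:/Temp/process.log")        // hypothetical path to the downloaded process log
File rejectFile = new File("C:/Temp/entity_rejects.txt") // hypothetical entity rejection file

// start with a clean rejection file
if (rejectFile.exists()) rejectFile.delete()

processLog.eachLine { line ->
    if (line.contains("Error: 3303")) {
        // line layout (simplified): <error text> | <rejected member> | <source data record>
        def parts = line.split("\\|")*.trim()
        String member = parts[1]                   // second entry: the rejected member
        def record = parts[2].split(",")*.trim()   // third entry: the source data record
        // only keep the member if it came from the entity column (second field) of the record
        if (record[1] == member) {
            rejectFile << member + System.lineSeparator()
        }
    }
}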

Now I have a file containing all the rejected entity members.

Using the REST API, the file can be uploaded to the location directory in Data Management. As there could be an existing file in the directory with the same name, the delete REST resource is used to remove it first; it doesn’t matter if the file does not exist.

After this section of the script has run, the entity rejection file should exist in the Data Management location folder.

Next the REST API comes into play again to run the metadata load rule that I created earlier in Data Management.
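For context, running a Data Management rule through the REST API is a POST to the jobs resource. The author’s script is PowerShell; as an illustration, a minimal Groovy sketch might look like the following, where the instance URL, credentials, rule name, periods and file name are all placeholders:

import groovy.json.JsonOutput
import groovy.json.JsonSlurper

// placeholders - replace with the real instance, credentials and rule details
String jobsUrl = "https://cloudinstance.oraclecloud.com/aif/rest/V1/jobs"
String auth = "identitydomain.username:password".bytes.encodeBase64().toString()

def payload = [
    jobType    : "DATARULE",
    jobName    : "MetadataRuleName",                         // the metadata load rule in Data Management
    startPeriod: "Jan-18",
    endPeriod  : "Jan-18",
    importMode : "REPLACE",
    exportMode : "STORE_DATA",
    fileName   : "inbox/metadatalocation/entity_rejects.txt" // the uploaded rejection file
]

def conn = new URL(jobsUrl).openConnection()
conn.setRequestMethod("POST")
conn.doOutput = true
conn.setRequestProperty("Authorization", "Basic " + auth)
conn.setRequestProperty("Content-Type", "application/json")
conn.outputStream.withWriter { it << JsonOutput.toJson(payload) }

// the response contains the job id, which can then be polled for status
def response = new JsonSlurper().parse(conn.inputStream)
println "Data Management job response: ${response}"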

At this point, if I check process details in Data Management it shows that the metadata rule has successfully completed.

In the workbench you can see that the entity members have been mapped to the correct parent.

Within the target application the entity members have been successfully loaded.

As the metadata has been loaded and pushed to Essbase the export stage of the data load rule can be run again.

This time the data load was successful.

The workbench confirms all rows of data have been loaded to the target application.

A quick retrieve verifies the data is definitely in the target application.

After the data load has successfully completed, a step could be added to run a rule to aggregate the data; once again this can be achieved by calling a REST resource, which I have covered in the past.

A completion email could then be sent based on the same concept shown in the previous blog post.

An example of the output from the full automated process would be:

So there we go, an automated data load solution that captures and handles rejections. If you are interested in a similar type of solution then please feel free to get in touch.