Saturday, 2 September 2017

EPM Cloud - Connections are here

In the latest EPM Cloud release there is some new functionality that I thought would be worth covering. It was originally titled EPM Connect in the presentations I have previously seen, and it allows seamless connectivity between different EPM Cloud services.

Before the latest release, if you had multiple EPM Cloud services such as PBCS, EPBCS or FCCS, each would act independently: to access the UI you would have to use a separate URL for each application, and there was no link between them. In the on-premise world you can access all your applications through a single point, which is workspace.

With this EPM Cloud release it is now possible to connect to multiple services from a source application and with the use of navigation flows make it all feel unified by embedding clusters or cards from different services into the source application.

To be able to create a connection, the following sources are currently supported:
  • Oracle Planning and Budgeting Cloud
  • Oracle Enterprise Planning and Budgeting Cloud
  • Oracle Financial Consolidation and Close Cloud
  • Oracle Tax Reporting Cloud
From this source, you are then able to make a connection to any of the following EPM Cloud services:
  • Oracle Planning and Budgeting Cloud
  • Oracle Enterprise Planning and Budgeting Cloud
  • Oracle Financial Consolidation and Close Cloud
  • Oracle Tax Reporting Cloud
  • Oracle Account Reconciliation Cloud
  • Oracle Profitability and Cost Management Cloud
  • Oracle Enterprise Performance Reporting Cloud
The matrix from the documentation provides a clearer picture:

If the services are in the same domain, then the connection can be configured using a service administrator account.

From information in the announcements and new features update document, it also looks possible to connect to services that are in different domains and in different data centers; this is achieved by configuring the domains to use single sign-on through the same identity provider.

With this release it is not possible to configure connections in different domains within the same data center.

I am not sure yet why it is currently only supported across different data centers and not in the same data center, but no doubt there is a technical reason behind it (I have now been informed it is due to current restrictions in shared identity management). I was hoping to test out setting up connections between different domains, but unfortunately the domains I have access to are all in the same data center, so I will have to wait until it is supported.

This slightly conflicts with what is described in the administrator documentation:

“If the target and source subscription instances are not on the same identity domain, then you might not be able to establish a connection between them. You must relocate one of the instances so that the source and target instances share an identity domain.”

I believe the above statement about relocating instances is going to be removed from the documentation, as it should only be done in exceptional circumstances; once it has been removed I will update this post.

Anyway, I will go through setting up connections as hopefully it will become clearer what this new functionality is all about.

To create a new connection to a cloud service there is a new card called “Connections”, which is part of the tools cluster.

Alternatively connections can be accessed from the navigator.

If you don’t see “Connections” it may be because you are using a custom navigation flow, it should be available in the default flow and can be added to any custom flows.

Once you go to the Connections card you can manage connections, so let us add a new connection.

You will then be presented with the list of EPM cloud services that you can make a connection to.

In this example I am going to make a connection from a source PBCS application to a target EPBCS application, but it could be any of the EPM Cloud services; the functionality will operate in the same way.

Once the connection has been validated the target application name will be populated. You may be wondering what happens when the service administrator password is changed; I will get on to that later in the post.

Please note that the instance you are connecting to will have to be upgraded to the latest release in order for the connection to be successful.

I did test what would happen if I tried to create a connection to an EPM Cloud service in a different domain but in the same data center, and received an error about the connection having to belong to the same domain.

Once the connection has been saved it will be added under manage connections, where it can be edited or deleted.

If you click the home icon and then go to the navigator you will see the new connection.

To navigate to the target application, you just have to click the connection name and the application will open in the same browser tab; alternatively, click the open window icon and the application will open in a new browser window. So a user can now access multiple cloud services through a single access point with one login.

You can build up connections to all the EPM cloud services you have access to.

The above is a true reflection of what the connections will look like, unlike what is currently in the documentation, where connections have been created to a PBCS instance but named as if they are to different cloud services.

I am going to stick with a single connection to EPBCS.

To add a card/cluster from the target application to your source application, go to navigation flows and select the navigation flow you want to update.

Select “Add Existing Card/Cluster”; this will open the usual library window, but now on the left side you will have the option to select any of your connections.

From the library I selected the EPBCS connection and then the Revenue card.

It is worth mentioning that:

“For this release, cards cannot be directly selected from Oracle Account Reconciliation Cloud, Oracle Enterprise Performance Reporting Cloud, and Oracle Profitability and Cost Management Cloud using the Add Existing Card/Cluster option.”

The Revenue card is added to the navigation flow with the connection name “EPBCS” and navigation flow name “Default”.

Once the navigation flow has been saved and is active, select the home icon and then “Reload Navigation Flow” to pick up the change.

The Revenue card from the target EPBCS application is now included in the source application’s main screen, or as Oracle likes to call it, the springboard.

Selecting the Revenue card will seamlessly connect to the target application and display as if it was part of the source application.

I could just as easily have picked an existing cluster to add from the target EPBCS application.

Once the flow has been saved and reloaded the cluster is displayed from the target application.

It is possible to mix it up and create a cluster which has cards from the local application or any of the EPM Cloud services; Oracle likes to term this a mash-up, but that saying means something totally different to me.

So let’s add a new cluster.

Now create a new card against the local application and the newly created cluster, and assign a form to the card.

Next, create another card against the same cluster, but this time assign a dashboard from the EPBCS application; the same type of artifacts can be selected whether it is a local or remote connection.

The navigation flow now has a new cluster with a card pointing to a form in the local PBCS application and a card pointing to a dashboard in the remote EPBCS application.

Save the flow, activate and reload and the cluster and cards are displayed in the springboard.

Once again it is possible to seamlessly flip between viewing a form in the local application

and a dashboard in the EPBCS application.

As you would expect, the clusters/cards also show up in the navigator.

We can take it a step further and create a card with tabs and mix it up between local and target cloud services. In the following example I have added vertical tabs to a new card, with one tab from the local PBCS application and one from the remote EPBCS application; the card is assigned to the existing cluster.

You know the drill, save the flow, activate, reload and the new card is available in the springboard and from the navigator.

Opening the card will now allow the viewing of vertical tabs which are looking at forms, dashboards and reports across the local and remote applications.

It is not limited to vertical tabs, and we can mix it up with horizontal tabs. For example, in the above “Mash it up” card I have added a new tab; the tab has been set to tabular, which automatically sets the orientation to horizontal.

Next I added two new sub tabs, one looking at a form in the local PBCS application and another at a form in the EPBCS application.

This was then saved and you can see it has been added to the card I created earlier.

Now I have a card that consists of vertical and horizontal tabs which work across cloud services. This is great for the end user, who does not need to be concerned about logging into multiple applications, and this is where you can appreciate the power of navigation flows across multiple services.

Remember, in these examples I am just looking at PBCS and EPBCS; you can build flows across any of the EPM Cloud services which you have access to.

For example I can add a connection to EPRCS and select from Reports, Snapshots or Packages.

Just like with all the previous examples the card/clusters/tabs will act in the same way.

You may be asking: what about security, do the navigation flows honour access permissions across the cloud services? Well, the answer is yes, and really it has to for the functionality to be of any use.

From my initial testing, a user will be able to see all the defined connections in the navigator even if they don't have access to some of the services; if the user tries to access one of the connections they do not have access to, they will receive an error.

Up to now I have been using the service administrator account to demo the flows, so let us take a quick look at the difference when a standard user accesses them.

Back to the Revenue card which is part of the EPBCS connection, the admin user can access multiple horizontal and vertical tabs.

Now for a standard user who has only been given access to the Revenue dashboard, which is part of the first vertical tab; the user does not have access to any of the other forms or dashboards that are part of the tabs.

When the user accesses the Revenue card they can only see the Revenue dashboard, so the access permissions are being honoured.

One final example with the “Connect Demo” cluster: the admin has access to all the cards that are part of the cluster.

The user does not have access to the form that is part of the “From EPBCS” card so when they view the cluster they don’t see the card.

The same goes for the “Mash it up” card with the mix of vertical and horizontal tabs across cloud services; the admin has full access.

The user will only see what they have been given access to in the local and target applications.

This applies to all the access permissions for the user, so if they access a form across a service they will only see the members they have been granted access to in the target application.

Right, going back to what I mentioned earlier when setting up a connection: what happens when the service administrator password is changed?

Well, initially everything looks OK, but that must be because of caching. If the service is restarted, or after the maintenance window, the connections will be greyed out in the navigator and inaccessible; clusters/cards/tabs from the target applications will not be visible any more in the source application until the connection has been updated with valid credentials.

I can see that being a bit of a pain, because the administrator will have to remember to update all the connections each time their password is changed; it is easy to forget, and yet another thing to add to the list, like having to update the password in EPM Automate or API scripts.

It would be nice if there was some kind of sync option. It would be great if there was a command in EPM Automate or the API to manage the service administrator password, and also the ability to find out how many days are left until the password expires, which would help with scripting. I am not sure how feasible it is though due to security concerns. Oracle, if you are watching, let me know your views on this :)

Finally, on to migrations. If you take a look at the artifact list you will see there is a new addition under Global Artifacts called “Service Connections”; this will contain each of the connections that have been configured.

If you export the snapshot there will be an XML file for each connection.

The XML is quite simple, but what you need to be aware of is that it does not contain a password, so if you are migrating or restoring then after the import you will need to edit each connection and enter the password.

Well, I definitely think that is enough for this post. I am pretty impressed with the new functionality and it is definitely a step in the right direction, until next time..

Tuesday, 15 August 2017

FDMEE - Custom Scheduler - Part 2

In the last part I went through an example solution to build a custom scheduler which could be accessed through the FDMEE UI. It was created using a combination of jython and the ODI Java API, and the post covered in detail the process to create a new batch schedule, but it could easily be adapted to schedule data and metadata load rules or custom scripts.

In this post I want to continue where I left off and move on to updating, deleting and viewing active schedules, and also look at the possibilities of running these tasks outside of the FDMEE UI.

Let us start off with updating an existing schedule. A custom script was registered which has parameters defined to select the schedule, set the start and end dates, and set the repetition in minutes or hours. These parameters will be displayed when the custom script is executed and then passed into the jython script, which will then update the schedule.

The parameter “SCHEDULE_ID” has been defined to call a query called “Batch Info”. This SQL query is against the ODI repository, which holds the scheduling information; the query has been defined to return an understandable description and the ODI internal ID for the schedule. This ID is important as it will be used with the ODI Java API to access the selected schedule.

You will see the query in action when I execute the custom script, which now appears in the UI under the “Custom Batch Scheduler” group it has been assigned to.

Executing the “Update Batch Schedule” script displays a window with the parameters that were defined in the script registration.

Now if I select the schedule to update, the SQL query comes into action and the script name, start to end date, repetition and schedule ID are displayed ready for selection.

It would be nice if there was the functionality to base a SQL query on the results of another SQL query, but unfortunately that is not yet available in FDMEE, so the remaining input fields need to be entered manually. As the schedule is being updated this is not much of an issue, because you would need to enter new values anyway.

Now the new schedule information has been entered, the script can be run, and a custom message is displayed if the update was successful.

The process logs also display the status of executing the custom script.

The FDMEE log associated with the process contains the updated schedule information.

To prove the schedule has been updated I can switch over to the ODI console where it should have a new start date of 20th of August at 8am.

As I explained in the last part, the end date has to be calculated and set as a maximum cycle duration; the execution cycle includes the interval between repetitions, which correctly matches the value entered when running the custom script.

To prove the scheduling information had been updated correctly, I could have also just run the update schedule custom script again and selected the option to return the active schedules; this would have returned the updated values from the ODI repository.

What you do need to watch out for when populating parameter value fields is that there is currently a 50-character limit; the SQL will return the values no problem, for example:

As the above text is over 50 characters, then after selecting it you would be hit with an unfriendly ADF error.

The magic to update the schedule is all handled in the jython script. The script follows the same concept as what I went through in the last part, so there is no need for me to show all the script again.

The parameter values are stored, and the schedule duration is calculated by finding the difference in minutes between the start and end date.
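In plain Python terms the calculation looks like this (a sketch only, not the actual jython from the script; the date format used here is an assumption):

```python
from datetime import datetime

def duration_minutes(start, end, fmt="%d-%m-%Y %H:%M"):
    """Return the whole number of minutes between two date strings."""
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return int(delta.total_seconds() // 60)

# A one-week schedule window: 7 days * 24 hours * 60 minutes = 10080
print(duration_minutes("20-08-2017 08:00", "27-08-2017 08:00"))
```

The resulting number of minutes is what gets set as the maximum cycle duration on the ODI schedule.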

A connection is made to the ODI repository and then the schedule is returned by searching based on the ID that was passed into the script.

The values of the schedule are then updated to the ones defined when running the script; these values are then committed and the agent schedule updated.

So that is creating and updating schedules covered off; on to deleting a schedule.

Once again a custom script is registered. This time there is only the requirement for one parameter, and that is to select the schedule to delete; the query is exactly the same as the one used in the update script.

After the script has been registered, it is available in script execution under the group which it was registered to.

When the script is executed there is one parameter available to select the schedule to delete.

Selecting the magnifying glass will run the SQL query to return all the active batch schedules.

I selected the first schedule in the list to be deleted.

A custom message is displayed if the delete was successful.

If I run the script again and view the schedules it confirms the schedule has definitely been deleted.

The jython script is much simpler this time, as all that is required is to retrieve the ID of the schedule from the custom script parameter values, connect to the ODI repository, find the schedule by the ID, delete the schedule, commit, and then update the ODI agent scheduler.

Just to recap that means with the custom scheduler solution we have the ability to create, update and delete schedules.

So how about viewing the active schedules? Well, you can certainly run the custom scripts like I have already shown to display the schedules, but personally I don’t feel like that is the nicest solution; what I did to overcome this was to create a report in FDMEE.

Before creating a report, a SQL query was created and registered; the query was similar to the one used in the update/delete scripts.

The query produces the following output that can be used in the FDMEE report.

Once the query was validated, the generate XML button was selected; this produces an XML file which can be loaded into Word using BI Publisher desktop to generate a template file.

Now that everything is in place, a new report definition was created, and the query, template and group were assigned to the definition; there was no need for any parameters for this report.

The report is available against the report group it has been assigned to.

When executing the report you have the option to generate in either PDF, HTML or Excel (XLSX).

I selected XLSX, and depending on browser settings it will either open directly in Excel or you will have the option to download and then open in Excel.

So this report can be generated at any time to provide up-to-date batch schedule information.

Finally, on to running activities such as creating a new schedule outside of the FDMEE UI. If you are not aware, it is possible to use a REST resource to run tasks such as data load rules, batches, custom scripts and reports; I have covered the functionality in previous blogs if you would like to understand it in more detail.

To run a custom script you can make a POST to the following URL format:


The body of the POST will need to include, in JSON format, the job type which will be REPORT, the report format type which will be SCRIPT, the job name which will be the name of the custom script to run, and the parameter display names and values.
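As a sketch, the body could be built like this in Python. The key names follow the description above, but the exact casing and the way the parameters are encoded may differ in the actual REST resource, so treat them as assumptions; the script name “Create Batch Schedule” and the parameter names are hypothetical:

```python
import json

def build_script_job(script_name, params):
    """Build a JSON body for running an FDMEE custom script over REST."""
    return json.dumps({
        "jobType": "REPORT",           # custom scripts are run as the REPORT job type
        "reportFormatType": "SCRIPT",  # SCRIPT identifies a custom script rather than a report
        "jobName": script_name,        # the registered name of the custom script
        "parameters": params,          # parameter display names and their values
    })

body = build_script_job("Create Batch Schedule",
                        {"Start Date": "20-08-2017 08:00",
                         "End Date": "27-08-2017 08:00",
                         "Repeat (Minutes)": "60"})
print(body)
```

The resulting string would then be posted to the REST URL with basic authentication, the same as any other FDMEE REST job.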

An example to run the script to create a new batch schedule using a REST client would be:

After the post has been made a response will be returned with FDMEE process information such as ID and status.

If the status is in a running state, a GET request can be made to keep checking the status.
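The polling loop itself is simple; here is a sketch with the HTTP call factored out as a callable, so the logic can be shown without a live service (the status values used here are placeholders, not the actual codes returned by FDMEE):

```python
import time

def poll_until_done(get_status, running_value, interval=5, max_checks=60):
    """Call get_status() repeatedly until it reports something other than running."""
    for _ in range(max_checks):
        status = get_status()
        if status != running_value:
            return status
        time.sleep(interval)
    raise RuntimeError("job did not finish within the polling window")

# Stubbed example: a job that reports RUNNING twice, then SUCCESS.
responses = iter(["RUNNING", "RUNNING", "SUCCESS"])
print(poll_until_done(lambda: next(responses), "RUNNING", interval=0))
```

In a real script, `get_status` would be a small function issuing the GET request against the process ID returned by the initial POST.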

You could then convert this into a script which could be called by an automated process with parameters passed in, or say run from a user’s client machine instead of having to log into workspace.

I created the following example script using PowerShell which uses the REST resource to run the custom script to create a new schedule; I know it requires error trapping, but it is just to give an idea.

When the script is run, it takes user input to define the parameters of the schedule, which it then converts into JSON, makes a POST to the REST resource, and then keeps checking the status until the process has successfully completed.

Running the report again to view the scheduled batches confirms that the new schedule has been created.

Job done, a full solution to create, update, delete and view schedules.