Saturday 31 December 2011

Planning - loading text member data update

A while back there was a post on the planning forum about loading text member data to a planning application using the Outline load utility. A reply on the post said it was not possible in 11.1.2.x, an SR was raised and it was confirmed as a bug. I knew it was possible in previous versions so I was going to test it out but never got round to it.

Then recently I received an email asking if it was possible to load text data using ODI to planning 11.1.2.1, as once again an SR had been raised and it had been confirmed as a bug. I knew I had definitely loaded text data when working on a previous project, but that was with planning 11.1.1.3.

I did write a blog on using an alternative method of loading text data, because at that point it was not possible directly with the ODI planning adaptor; I think it wasn’t possible in the first releases of 11 but it certainly was possible in 11.1.1.3.

I thought I had best test out both the Outline load utility and ODI to see if it is possible to load text member data to an 11.1.2.1 planning application.

I first created an account member called TextMember1 with a data type of text.


I constructed a simple form so that I could view the results of loading text data to the selected POV.


I set up the Data Load Settings in planning to match how I had set up the form, with the Data Load Dimension as Period and the Driver Dimension as Account with the member TextMember1.


I made sure the Data Load Settings had definitely been saved correctly by running a SQL query against the planning application’s relational tables.
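
For reference, the check can be done with a query along these lines; this is only a sketch from memory, as the HSP_DRIVER_MEMBER and HSP_OBJECT table and column names can vary between planning releases, so treat it as illustrative rather than definitive.

-- hypothetical sketch: table/column names vary between planning releases
SELECT O.OBJECT_NAME AS DRIVER_MEMBER
FROM HSP_DRIVER_MEMBER D
INNER JOIN HSP_OBJECT O ON O.OBJECT_ID = D.MEMBER_ID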


I created a CSV file in the format required for the Outline load utility and populated it with the same POV details as the planning web form.
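
The file looked something along these lines; the POV members below are just examples, so they would need to be replaced with members valid for your own form.

Period,TextMember1,Point-of-View,Data Load Cube Name
Jan,"Text data loaded with the Outline load utility","Actual,Version1,FY12,E001",Plan1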


OutlineLoad /A:PLANSAMP /U:admin /M /I:textload.csv /D:Period /L:dataload.log /X:dataload.exc

The Outline load utility was run from the command line with the load dimension set to Period to match the Data Load Settings and the CSV file.
 

The output log from the utility showed that 1 record was successfully processed and loaded.


The planning web form was run again and the text data was displayed successfully.

I know that you can also use the Outline load utility to load data without having to set the Data Load Settings in planning, by specifying the driver information directly in the source file, so I thought I would give that a try as well.


I created and populated a new file using the Driver Member and Value column headings.
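
Again the members below are only examples; the difference this time is that the driver member and its value are explicit columns in the file instead of being picked up from the Data Load Settings.

Period,Point-of-View,Data Load Cube Name,Driver Member,Value
Jan,"Actual,Version1,FY12,E001",Plan1,TextMember1,"Text data loaded with driver members in the file"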


OutlineLoad /A:PLANSAMP /U:admin /M /I:textload2.csv /TR /L:dataload.log /X:dataload.exc

I successfully ran the utility again, but this time removed the load dimension parameter /D and used /TR, which means the driver information is specified in the file.
 

The form displayed the newly loaded text data.

So no problems loading text data using the Outline load utility; time to move on to ODI. The email I received specified ODI 10.1.3.6.x with planning 11.1.2.1, so that is what my first test would be with.
 

I reversed the planning application and checked that the columns “Data Load Cube Name”, “TextMember1” and “Point-of-View” had been added to the Period Datastore.


I created a file Datastore against the CSV file I used with the Outline load utility.


I created a new interface with the CSV file as the source and the Period dimension as the target; the target columns were then mapped to the source.

The flow was set to use “LKM File to SQL” with the memory engine as the staging area, and then “IKM SQL to Hyperion Planning”.


The interface executed successfully and the web form was reloaded to show the correct text data.

So no problems using ODI 10g; how about ODI 11.1.1.5?


I replicated everything I had created in ODI 10g, but this time used Oracle as the staging area due to a bug using the memory engine with planning.
 

The interface ran with no problems and once again the correct text data was shown in the planning web form.

All testing was successful so at least I can put my mind at rest on this subject.

Sunday 11 December 2011

Loading to EPMA planning applications using interface tables – Part 6

In the last part of the series I went through the steps using the command line driven EPMA batch client to import from interface tables and then deploy to planning.

Today I am going to look at using ODI to control the running of the batch client and add in some error handling functionality. Unfortunately there are no adaptors or APIs available to use with ODI and EPMA, so I am going to use the OS command tool to run the EPMA batch client.

If you look back to part 4 of this series I had created a number of interfaces that loaded the planning metadata into the EPMA interface tables.


I ended up with the above package which I will expand on to bring in the batch client functionality.

Usually when designing packages or using a tool like the OS command one I would add in control using variables rather than hardcoding values, but for today’s exercise I will keep it simple, which should make it easier to read.

I must also point out that my methods are not the only way of implementing this, and you may think to yourself that there are better ways of achieving the end result; that is the beauty of using ODI, as there is not always a right and wrong method. If you do feel you have come up with a better solution then feel free to get in touch to discuss.

If you are going to use the OS command with the EPMA batch client then the ODI agent will require access to the client. If you look back at the last part I gave advice on how to install the client on a supported OS, so you have the option of either installing the client on a machine with an agent or installing an agent on a machine hosting the client.


If you drag the OS command tool onto the diagram in a package you will see there are a number of parameters to set. The most important parameter is “Command to execute”, which is basically what you want ODI to execute at the command line.

In the last part I executed the following from the command line to import metadata from the interface tables to EPMA.

E:\Oracle\Middleware\EPMSystem11R1\products\Foundation\BPMA\EPMABatchClient\epma-batch-client.bat -C"F:\Scripts\Import.txt" -R"F:\Scripts\Output.log" -Uadmin -Ppassword

I am going to use the same import script and log so it is just a matter of transferring the command line to the parameter value.



The working directory has been set to the location of the batch client.

The command to execute calls the client and passes in parameters for the script to use, the log to output to and the account details.
Synchronous is set to Yes so that ODI will wait for a return code before continuing.

I have left the other parameters such as Output file and Error file blank because the batch client will output all the information to the file Output.log.


The EPMA batch client script is the same as the one from the last part in the series; it logs in and imports from the interface tables to EPMA based on a profile “SAMPLE_APP”.
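
For anyone who has not read the last part, the script is along these lines; this is a cut-down sketch, the exact parameter list is covered in the batch client documentation and the credentials are passed in on the command line with -U and -P.

Login;
Execute Import Parameters(importtype, profilename, filename, waitforcompletion) Values('InterfaceTables', 'SAMPLE_APP', '', 'true');
Logout;
Exit;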

If you wanted to get clever you could actually create the script on the fly but that is not the objective for today.


As I mentioned earlier, I would normally use variables like the example above; these can be set at runtime and make the integrations much more versatile.

So there we go, end of blog? Not so quick: what happens if there are errors in the output file generated by the batch client?

There are a number of different factors which can generate errors or failures.


A fatal error will cause the package to fail; in the example above I changed the working directory to an invalid location.

Fatal errors should be rare once you have defined all the correct parameters and can be acted upon easily, e.g. send an email if the OS command fails or simply check the Operator.

If, say, an incorrect password or profile is used then the step will not fail.


In the example above I changed the import profile in the script to an invalid one; as you can see the step is shown as successful, but viewing the log shows it has actually failed.

Now there are lots of different ways of dealing with this error, e.g. creating a batch script with error handling, reading the return code, parsing the file….

I am going to use a different method to the ones mentioned. Whether this is the right option or the best option I don’t think I can answer; I am sure somebody will come up with an annoying best practice statement to try and define what is correct.

If you take a look at the log file you will see that it is in a fixed format, meaning that the start of each column is known by its position.

Now the idea is to load this log file into a table and then query it to see if there was a failure, which is indicated by an ERROR line in the log.

First of all I created a new file Data Store and pointed it at the Output.log file.


The File Format was set to Fixed with no header line.


As the columns are fixed I could set the start position of each column; I set the message column to a length of 200, which should be plenty.


Viewing the Data Store shows that the columns are being separated correctly.

The next step was to create a simple interface that loads the log information into a table.


The interface uses “LKM File to SQL” and “IKM SQL Control Append” with the option set to truncate the table each time it is run.

With the log information in the table, I created a refreshing variable to check if there was a failure.


When executed, the number of errors will be stored in the variable; if the number is greater than zero then the script failed.
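
The query behind the variable is just a count of the ERROR lines in the log table, something along the lines of the following; BATCH_LOG and MSG_TYPE are simply names I have made up here for the table and type column created from the file Data Store.

-- BATCH_LOG / MSG_TYPE are made-up names for the log table and its type column
SELECT COUNT(*)
FROM BATCH_LOG
WHERE MSG_TYPE = 'ERROR'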


These are the steps in the package up to now. The first step uses the File delete tool and deletes the output log, because the EPMA batch client appends to the log and only the information from the last run is required.

The next step uses the OS command tool to call the EPMA batch client and the defined script, then the output log is loaded to a table and the refreshing variable stores the number of errors.

The variable holding the number of errors can now be evaluated to see if the value is greater than zero.


If the value is greater than zero the step will complete with a success, otherwise it will produce a failure, so the outcome defines how to proceed.

I have decided that if there has been a failure then I want to store the failure message and then raise an error with the stored message, which will cause the package to fail.




Another refreshing variable was created which queries the table holding the log information and stores the error message.
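
The query just picks out the message text from an ERROR line, for example the following, again using my made-up table and column names; MAX is used so that only a single row is returned if there happens to be more than one error.

-- made-up names again; MAX guarantees a single row back
SELECT MAX(MSG_TEXT)
FROM BATCH_LOG
WHERE MSG_TYPE = 'ERROR'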


A new procedure was created with a step using Jython as the technology.
An error is raised with the message which has just been stored using the refreshing variable.
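
The step itself only needs a single line of Jython; assuming the refreshing variable is called ERR_MSG in a project with the code EPMA (both names made up for this example), it would look something like:

# ERR_MSG is the refreshing variable holding the message; EPMA is the (made-up) project code
raise Exception("EPMA batch client failed - #EPMA.ERR_MSG")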

These steps are then added to the package; to add extra functionality an additional step could easily be inserted to send out a failure email and attach the output log.


If there are any fatal errors in the log then the package will fail with a message informing of the reason.

You could move on to deploying the EPMA application to planning using the same type of logic that I have used up to now in the package, but there is a possibility that there were no fatal errors in loading from the interface tables while normal errors were still generated; if there were errors you might not want to deploy the application just yet.


In this example I changed a member’s property so that it was set to a non-existent Smart List and ran the package again; the package was successful but there is an error in the log.

I don’t want the package to fail, and I don’t want the application to be deployed; instead I want an email sent out informing of the error.


Time for another refreshing variable, which will look for the line with the import details and store the number of errors; I am only storing the first digit as I am only interested in knowing whether it is greater than zero.
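
I don’t have the exact wording of the summary line to hand, so the query below is only a sketch assuming it ends in the style “… errors: 2”; the SUBSTR/INSTR positions would need adjusting to match the real line, and the table and column names are the made-up ones from earlier.

-- sketch only: assumes the import summary line ends in the style '... errors: 2'
SELECT SUBSTR(MSG_TEXT, INSTR(MSG_TEXT, 'errors:') + 8, 1)
FROM BATCH_LOG
WHERE MSG_TEXT LIKE '%errors:%'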

As previously, the variable can be evaluated and then steps added depending on whether the result is true or false.


So now the log will be checked to see if there are any import errors. If there are errors then an email is sent attaching the log for information and the integration ends; if there are no import errors the EPMA application is deployed to planning using an OS command.

I am not going to cover the deployment as it uses the same logic as the import and in the last part of the series I went through how to script it.




I added in the metadata load interfaces and some additional error logging around the deployment from EPMA to planning, which means in the end you have a solution that will take metadata from a source and deploy it to planning/essbase with relatively little coding, and which can then be easily scheduled or executed over the web using the console.

Obviously you can make it a lot more sophisticated with added functionality and make more use of variables but hopefully you get the idea.

If the package does get complex and you are using ODI 11.1.1.5+ then you could look into whether using load plans would add any benefit.

I am going to leave it there for today. This was going to be the last part, but I have had a few queries around using the interface tables which I may cover if I find the time.