Best Practices for IBM Planning Analytics (TM1) Integration

Posted by Matt Berson on February 6, 2020

Effective integration of IBM Planning Analytics (TM1) with existing infrastructure is usually one of the top priorities for organizations looking to adopt TM1. TM1 ships with a set of embedded tools that make integration reasonably straightforward for any combination of cloud and on-premises interactions. The three most common options are outlined below; which one is optimal will vary from situation to situation, depending on the criteria outlined at the end of this post.

TM1 Integration Options

The three most common integration options are the following:

ODBC Connection using TM1 Turbo Integrator

Turbo Integrator (TI) is a utility that lets you create processes to automate data importation, manage metadata, and perform other administrative tasks in the TM1 environment. TI also provides a powerful scripting language used throughout TM1.

The language can be used as an ETL tool as well as for other scripting and rule writing. Using TI, TM1 can leverage an ODBC connection to almost any relational database. Once the ODBC connection is established, TM1 can integrate with the database both to load data in and to push data back out, as follows:

Pull data into cubes with a defined SQL query.

This is the most common way to pull data into TM1. The process is efficient and fast: TM1 supports multi-threaded loading, running parallel processes for significant speed gains. The uploads can be scheduled or executed on demand as necessary. This approach is commonly used to load GL actuals into cubes during the monthly close process, oftentimes as frequently as every 15 minutes during critical periods.
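As a rough illustration, here is a minimal TI sketch of an ODBC pull. The DSN, credentials, query, cube, and variable names (FIN_DW, tm1_etl, gl_actuals, GL Actuals, vAccount, vPeriod, vAmount) are all hypothetical placeholders; in practice the data source and variables are usually configured through the process's Data Source and Variables tabs rather than hand-coded.

    # Prolog tab: point the process at the ODBC source.
    # DSN, credentials, and query are illustrative placeholders.
    DatasourceType = 'ODBC';
    DatasourceNameForServer = 'FIN_DW';
    DatasourceNameForClient = 'FIN_DW';
    DatasourceUsername = 'tm1_etl';
    DatasourcePassword = 'CHANGE_ME';
    DatasourceQuery = 'SELECT account, period, amount FROM gl_actuals';

    # Data tab: executes once per row returned by the query. vAccount,
    # vPeriod, and vAmount are the process variables mapped to the columns.
    CellPutN(vAmount, 'GL Actuals', vAccount, vPeriod);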

Push data back into relational tables using the “ODBCOutput” function.

This function allows for the execution of a SQL command over a defined ODBC connection. Unlike the pull described above, the ODBCOutput function is not very sophisticated: it executes and commits a single SQL statement at a time (i.e., a single INSERT command). It is generally slow and normally only appropriate for smaller data volumes.
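Here is a minimal sketch of the pattern, reusing the hypothetical DSN and variables from above plus a hypothetical target table (budget_export). Note the doubled single quotes, which is how TI escapes a quote inside a string literal:

    # Prolog tab: open the ODBC connection once.
    ODBCOpen('FIN_DW', 'tm1_etl', 'CHANGE_ME');

    # Data tab: one INSERT statement is built, executed, and committed
    # per record, which is why this approach is slow for large volumes.
    ODBCOutput('FIN_DW',
      'INSERT INTO budget_export (account, period, amount) VALUES (''' |
      vAccount | ''', ''' | vPeriod | ''', ' | NumberToString(vAmount) | ')');

    # Epilog tab: close the connection when the process finishes.
    ODBCClose('FIN_DW');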

The TI scripting for both of the mechanisms above is normally very simple and requires only a basic understanding of TI functions; the TM1 GUI helps with much of the configuration. Users can also layer in additional logic to transform or manipulate the data as required by the implementation.
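For example, a few extra lines in the Data tab can filter and remap incoming rows before they are written to the cube (the account code and element name below are again placeholders):

    # Data tab: skip zero rows rather than writing them to the cube.
    IF(vAmount = 0);
      ItemSkip;
    ENDIF;

    # Remap a legacy source account code to its TM1 element name.
    IF(vAccount @= '40000');
      vAccount = 'Gross Revenue';
    ENDIF;

    CellPutN(vAmount, 'GL Actuals', vAccount, vPeriod);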

Push-Pull Using Flat Files

Using TI, there is also the ability to read and create flat files. As with ODBC connections, this can be used for both incoming and outgoing data. Data coming into TM1 is normally loaded through ODBC connections; however, due to the limitations of the ODBCOutput function, flat files are a very common solution for pushing data out of TM1 to a relational database. After TM1 writes to a flat file, the relational database executes a load of that file. These processes can be orchestrated through schedules, trigger files, custom scripts, or whatever the orchestration tool of choice is.
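The TM1 side of the hand-off is typically a one-line TextOutput call in the Data tab of a process sourced from a cube view. A minimal sketch, with a hypothetical export path and the same placeholder variables; TI closes the output file automatically when the process completes:

    # Data tab: write one delimited record per cell to a flat file
    # that the relational database can then bulk-load.
    TextOutput('E:\exports\budget_export.csv',
      vAccount, vPeriod, NumberToString(vAmount));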

There are a couple of downsides to this approach:

  • It is heavily reliant on orchestration tools or remote calls to keep the jobs running in tandem and synchronized
  • In the current environment, the data integration frequently involves cloud-to-on-premises or cloud-to-cloud data transfer, which adds an extra layer of complexity in managing file transfer credentials and tokens

Leveraging the REST API

The API option is becoming more common and opens up the possibility of using a single tool to manage both the push and pull of data. Uploading data into TM1 using the REST API is fairly straightforward. For reverse integration (pushing data from TM1 to a data warehouse), a process on the relational side can request data from the TM1 cubes using the TM1 REST API. The data is returned in a JSON response that can then be parsed and handled in a variety of ways. Some examples are the following:

  • Script everything in Python, including the REST API call, parsing the response, and inserting into a SQL table
  • Issue the REST API URL calls using the Linux curl utility
  • Use standard JSON parsing tools, such as the Oracle JSON parser, to translate the return value into Oracle tables

More information about the TM1 REST API is available in IBM's Planning Analytics documentation.

ODBO Connection (TM1 Integration with Multi-Dimensional Sources)

One additional, though less frequently leveraged, option is an ODBO connection to other multi-dimensional sources. This can be used to connect to repositories such as Essbase, Microsoft SQL Server Analysis Services, or even other TM1 server instances. From there, MDX queries can be used to retrieve data in much the same way SQL is used within ODBC connections. This approach can cut out the middleman, avoiding the extra effort of extracting formatted information from existing systems, and can also support automated validations between related systems.

What Determines the Best Approach

The best option will vary across organizations and use cases, and it will have to be decided in the proper business context. Some of the key questions and determinants are the following:

  1. How frequent is the data transmission?
  2. What is the timing of the data transmission?
  3. What is the volume of each transmission?
  4. What and where are my data sources?
  5. Is the organization investing time and resources in better data orchestration between disparate tools?

Topics: IBM Cognos TM1, IBM Planning Analytics