ACG Business Analytics Blog

Best Practices for IBM Planning Analytics (TM1) Integration

Posted by Matt Berson on Thu, Feb, 06, 2020 @ 08:27 AM

Effective integration of IBM Planning Analytics (TM1) with existing infrastructure is usually one of the top priorities for organizations looking to use TM1. TM1 has a set of embedded tools that make integration reasonably straightforward for any combination of cloud and on-premises interactions. The three most common options are outlined below; which one is optimal will vary by situation, depending on the criteria outlined at the end of this post.

TM1 Integration Options

The three most common integration options are the following:

ODBC Connection using TM1 Turbo Integrator

Turbo Integrator (TI) is a utility that lets you create processes to automate data importation, manage your metadata, and perform other administrative tasks in the TM1 environment.  TI also provides a powerful scripting language that is used by TM1.

The language can be used as an ETL tool as well as for other scripting and rule writing. Using TI, TM1 can leverage an ODBC connection to almost any relational database. Once the ODBC connection is established, TM1 can integrate with the database both to load data and to push data back, as follows:

Pull Data into cubes with a defined SQL query.

This is the most common way to pull data into TM1. The process is efficient and fast; TM1 supports multi-threading, so data can be loaded in parallel at significant speed. The uploads can be scheduled or executed on demand as necessary. This approach is commonly used to load GL Actuals into cubes during the monthly close process, oftentimes as frequently as every 15 minutes during critical periods.
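As a rough illustration of the parallel-load idea (plain Python rather than TI code; a stdlib-only sketch with mock data, where the table, cube and dimension names are invented), the pattern looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch only: simulates a multi-threaded load by partitioning
# a query result (mocked below) across parallel workers, the way parallel
# TI processes each load their own slice of a SQL result set.
GL_ROWS = [  # mock result of: SELECT account, period, amount FROM gl_actuals
    ("Revenue", "Jan", 1000.0),
    ("Revenue", "Feb", 1100.0),
    ("COGS",    "Jan",  400.0),
    ("COGS",    "Feb",  450.0),
]

cube = {}  # stand-in for a TM1 cube: (account, period) -> value

def load_partition(rows):
    # Each worker plays the role of one parallel TI process writing its slice.
    for account, period, amount in rows:
        cube[(account, period)] = amount  # analogous to one cell write per row

def parallel_load(rows, workers=2):
    # Split rows round-robin across workers, mirroring a multi-threaded load.
    partitions = [rows[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(load_partition, partitions))
    return cube

loaded = parallel_load(GL_ROWS)
```

In a real implementation, each "worker" would be a TI process bound to the ODBC data source, with the partitioning done in the SQL query itself.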

Push data back into relational tables using the “ODBCOutput” function.

This function allows for the execution of a SQL command over a defined ODBC connection. Unlike the pull mechanism mentioned above, the ODBCOutput function is not very sophisticated. It executes and commits a single SQL command at a time (i.e. a single insert statement). It is generally slow and is normally only appropriate for smaller data volumes.
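To make the row-at-a-time behavior concrete, here is a hedged Python sketch of what ODBCOutput effectively does: emit one INSERT per cube cell. The table and column names are invented for illustration; they are not from any real schema.

```python
# Sketch only: each cube cell becomes its own committed SQL statement,
# which is exactly why this pattern is slow for large data volumes.
def odbc_output_statements(cells):
    """Yield one INSERT per cube cell, mirroring ODBCOutput's
    one-command-at-a-time behavior. Table/column names are illustrative."""
    for (account, period), value in cells.items():
        yield (
            "INSERT INTO fct_forecast (account, period, amount) "
            f"VALUES ('{account}', '{period}', {value})"
        )

cells = {("Revenue", "Jan"): 1000.0, ("Revenue", "Feb"): 1100.0}
statements = list(odbc_output_statements(cells))
```

A real TI process would pass each generated statement to ODBCOutput in a loop over the cube view, committing one row per call.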

The TI scripting in both of the mechanisms above is normally very simple and only requires a basic understanding of TI functions. The TM1 GUI will help with much of the configuration. Users can also layer in additional logic to transform / manipulate the data as required by the implementation.

Push-Pull Using Flat Files

Using TI, there is also the ability to read and create flat files. Similar to ODBC connections, this can be used for both incoming and outgoing data. Data coming into TM1 is normally handled through ODBC connections; however, due to the limitations of the ODBCOutput function, flat files are a very common solution for pushing data out of TM1 to a relational database. After TM1 pushes to a flat file, the relational database executes a load of the flat file. These processes can be orchestrated through schedules, trigger files, custom scripts or whatever the tool of choice is.
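The export side of this pattern can be sketched in a few lines of Python (stdlib only; the column layout and names are invented for the example). The target database would then bulk-load the resulting file with its own loader (e.g. LOAD DATA or BULK INSERT):

```python
import csv
import io

# Sketch: export a cube slice to a delimited file that the target
# database can bulk-load. Column layout here is illustrative only.
def export_slice(cells, out):
    writer = csv.writer(out)
    writer.writerow(["account", "period", "amount"])  # header for the loader
    for (account, period), value in sorted(cells.items()):
        writer.writerow([account, period, value])

buf = io.StringIO()  # a real TI process would write to a file on disk
export_slice({("COGS", "Jan"): 400.0, ("Revenue", "Jan"): 1000.0}, buf)
```

The orchestration layer (scheduler, trigger file, or custom script) is then responsible for signaling the database that the file is complete and ready to load.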

There are a couple of downsides to this approach:

  • It is very reliant on orchestration tools or remote calls to make sure the jobs are running in tandem / are synchronized
  • In the current environment, the data integration frequently involves cloud-to-on-prem OR cloud-to-cloud data transfer, which creates an extra layer of complexity to manage file transfer credentials and tokens

Leveraging the REST API

The API option is becoming more common and opens up options for a single tool to manage the push-pull of data. Uploading data into TM1 using the REST API is straightforward. For reverse integration (pushing data from TM1 to a data warehouse), a process can request data from the TM1 cubes using the TM1 REST API. The data is returned in a JSON response that can then be parsed and handled in a variety of ways. Some examples are the following:

  • Script everything using Python including the REST API call, parsing the response, and inserting into a SQL table
  • Using the Linux curl utility, invoke the REST API URL calls
  • Use standard JSON Parsing tools like Oracle JSON Parser to easily translate the return value into Oracle
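As a minimal sketch of the first option, the snippet below parses a hand-written JSON payload shaped like a simplified cellset response and turns it into rows suitable for a SQL insert. The real response carries more metadata; only the ordinal and value are used here, and the row labels are assumptions for the example:

```python
import json

# Simplified, hand-written payload in the spirit of a cellset response;
# the actual TM1 REST API response contains additional metadata.
sample = json.loads("""
{ "Cells": [ {"Ordinal": 0, "Value": 1000.0},
             {"Ordinal": 1, "Value": 1100.0} ] }
""")

def cells_to_rows(payload, row_labels):
    # Pair each cell value with its axis label by ordinal position.
    return [(row_labels[c["Ordinal"]], c["Value"]) for c in payload["Cells"]]

rows = cells_to_rows(sample, ["Jan", "Feb"])
```

Each resulting tuple can then be bound to a parameterized INSERT statement against the warehouse table.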

More information about the TM1 REST API can be found here.

ODBO Connection (TM1 Integration with Multi-Dimensional sources)

One additional, though less commonly leveraged, option is using an ODBO connection to other multi-dimensional sources. This could be used to connect to other multi-dimensional repositories such as Essbase, Microsoft SQL Server Analysis Services (formerly MS Analysis Services) or even other TM1 server instances. From there, MDX queries can be used in a similar fashion to how SQL is used within ODBC connections to retrieve data. This approach could be used to remove the middleman and the additional effort of extracting formatted information from existing systems, and/or to perform automated validations between related systems.
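For a flavor of what such a query looks like, the helper below assembles an MDX statement of the kind an ODBO connection would send. The cube and dimension names are invented for illustration:

```python
# Illustrative only: builds an MDX query string of the kind an ODBO
# connection would send to another multi-dimensional source.
# Cube, dimension and member names here are invented.
def build_mdx(cube, rows_set, columns_set):
    return (
        f"SELECT {columns_set} ON COLUMNS, "
        f"{rows_set} ON ROWS "
        f"FROM [{cube}]"
    )

mdx = build_mdx(
    cube="Sales",
    rows_set="{[Account].[Revenue], [Account].[COGS]}",
    columns_set="{[Period].[Jan], [Period].[Feb]}",
)
```

The same query shape works against any of the ODBO-capable sources mentioned above, which is what makes this option useful for cross-system validation.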

What Determines the Best Approach

The best option will vary across organizations and use cases and has to be decided in the proper business context. Some of the key questions / determinants are the following:

  1. How frequent is the data transmission?
  2. What is the timing of the data transmission?
  3. What is the volume of each transmission?
  4. What and where are my data sources?
  5. Is the organization investing time and resources in better data orchestration between disparate tools?

Integrated Budgeting Solutions

Topics: IBM Cognos TM1, IBM Planning Analytics

Integrated Budgeting for Balance Sheet and Cash Flow

Posted by Rob Harag on Mon, Jan, 27, 2020 @ 07:10 AM

The primary focus of budgeting and forecasting at a majority of companies is the Income Statement and associated KPIs (Revenue, EBITDA, Net Income). Significant investments are often made to increase the accuracy of the budget and forecast via driver-based modeling for key P&L line items such as Compensation, CAPEX, etc. Planning for the P&L is distributed across the enterprise to collect the assumptions that drive business performance. Budgeting and forecasting of the Balance Sheet and Cash position is oftentimes done offline by a smaller group in finance.

Meanwhile - the Balance Sheet and, more specifically, Cash are critical business variables that need to be understood and closely managed. This is even more important in fast-growing companies that may rely on limited sources of funding to help fuel their growth before they reach the break-even point. Any economic growth that is modeled in the Income Statement can only be achieved if there are adequate sources of funding in the long term and the cash balance can be sustained. A number of our customers have expressed that understanding their cash balance over the medium term was one of their top priorities when installing a new finance system.

An Integrated Financial Budgeting solution that connects the dots across the financials can create a lot of efficiency and increase insight. A simplified example of a model that we used with a number of customers can be seen in this video. Using an interconnected set of inputs and assumptions that are driven by the end user, every financial assumption automatically translates to the P&L and balance sheet and shows the resulting impact on cash. An actual model will support much more depth with a much larger variety of assumptions; some of the sample transactions shown here are the following:

  • Modeling of revenue and how that automatically translates into Accounts Receivable
  • Relationship between account receivable and cash based on user-driven assumptions for cash collection
  • Budgeting for individual line items such as Insurance Prepayments and the related impact of amortizing the balance into expenses over time
  • Purchases of individual assets with impact on cash as well as P&L and retained earnings based on user driven payment and depreciation assumptions
  • A Balance Sheet walk-forward to explain the changes in balances month-over-month by showing additions and reductions in balances by account line item.

This model is powered by IBM Planning Analytics (TM1), which is uniquely positioned to support such integrated financial modeling. Among many advantages, the key benefits when using IBM Planning Analytics (TM1) are the following:

  • Real-time (in-memory) calculation of results that facilitates effective modeling, with changes reflected and aggregated for immediate review and analysis
  • Scalability – the model will scale to a significant volume of data and relationships and will support complex modeling needs without any performance impact
  • Ability to configure the assumptions and account inter-relationships by the end user to arrive at the desired model without relying on IT for development and system changes
  • Scenario planning – IBM PA provides an unlimited number of versions and scenarios that can be run in parallel to test various sets of assumptions and their respective impact on cash in the short or long term
  • Intuitive and friendly user interface with XLS and Web options for analysis, reporting, dashboarding and visualization

Learn more about ACG's integrated budgeting solutions by clicking the button below.

Integrated Budgeting Solutions

Topics: IBM Cognos TM1, IBM Planning Analytics

IBM PAX - The 4 Report Types and How to Use Them

Posted by Rob Harag on Thu, Dec, 19, 2019 @ 11:05 AM

IBM Planning Analytics for Excel (PAX) gives users four different report types to choose from depending on the type of report or analysis they would like to create. Some of the report types are new and represent a change from the former Perspectives or CAFE formats. This post is a high-level overview and consideration; for a more detailed discussion, download our free 4-page guide that includes descriptions of the reports, pros and cons, and so on.

The four basic views or report types in IBM Planning Analytics for Excel (PAX) are the following:


Exploration

This is the most free-form type of report and is best used for analytics and “Exploration”. It allows for easy pivoting and drill-down on rows and columns, and makes it really easy to analyze variances and drill down to understand the drivers of performance. It is also very effective as a first step to set up a desired view that can then be converted to one of the other report types for further use.

Quick Report

This is a new view for IBM and is most similar to the traditional “Block Retrieve” that will be familiar to legacy Essbase users or others using similar tools. The report is defined as a block of values included in rows, columns and context, and the numbers are retrieved as a block. Quick Reports are not well suited for free-form analytics and the drill-down capability is fairly clunky; however, they are great for standardized reporting, allow multiple cubes / reports to be combined on a single page, and can be used to work offline with subsequent submission of values into the system.

Custom Report

This is the legacy “Perspectives” report from the old TM1 platform. This report is the most flexible, allowing any combination of values to be included on the page. Each cell is a specific formula that draws from a set of defined values and dimensions. The report provides unlimited configuration and formatting flexibility and is probably the most flexible in terms of analytics, standardized reporting, and custom view and report creation.

Dynamic Report

Finally – this is the old “Active Form” from TM1. It is similar to the Custom Report but allows for drill-down on rows and the ability to navigate the hierarchy and flex with any changes in the underlying structures.

Given this many options, it can be difficult for users to decide which report type to use in a particular scenario. For example, if you are looking for standardized reporting, is the Quick or Custom Report better? What format should you use for input templates? When do you use a Dynamic Report vs a Custom Report?

The decision has to be made in the proper business context and will generally fall into three categories:

Standardized Reporting

A Custom Report will provide the most flexibility in terms of report structure, formatting and adding external items such as calculations, visualizations, etc. It is the format of choice for many users to build their reporting package. As a cell-based retrieve, it will facilitate any combination of cells and values, and allows for asymmetric reports that need to combine different values in rows and columns for functional reporting. The report is very stable and will maintain its integrity even after significant customization.

The second option for reporting is the Quick Report. As a block-based retrieve, it does not include any formulas and is thus the most performant of all. It facilitates custom formatting, allows insertion of rows and columns, and supports the inclusion of custom calculations and values. Multiple reports can be combined on a single page and the format supports virtual hierarchies. A Quick Report is convenient in that it can be distributed directly to users without system access; a Custom Report needs to first be saved as “values” before distribution, thus requiring an extra step.

Budgeting and Forecasting

Just like for reporting, Custom and Quick Reports are the best choices, with the Quick Report often the better fit. Input templates tend to be pretty well defined and locked down, and therefore do not need a lot of flexibility or customization. A Quick Report can be set up and formatted as needed. It provides the ability to work offline and submit all changes upon re-connecting to the system. It supports block upload of information (users can copy and paste blocks of content from another book / area for upload), works with Excel formulas, and supports calculations.

Analytics and Explorations

The “Exploration” is the best choice for free form analytics. It allows drag and drop of dimensions into rows, columns and page context, drill down on rows and columns, leverages subset edits with all static and dynamic subsets, supports virtual hierarchies, etc. This is the “Ultimate Pivot Table” and for the pivoting enthusiasts out there it will open the door to unbounded analytics. There is no better mechanism for analyzing variances and drilling down to detail to understand the underlying drivers and causes.

For additional detail, download our free 4-page guide on the various use cases and guide for selection.

Download Guide

Topics: IBM Cognos TM1, IBM Planning Analytics

The Future of IBM Planning Analytics - IBM Data and AI Forum Debrief

Posted by Peter Edwards on Thu, Oct, 31, 2019 @ 08:58 AM

There was an exciting set of news and updates around IBM PA at the IBM Data and AI Conference in Miami last week. Through a variety of presentations by IBM Offering Management as well as in private meetings with IBM executives, we felt a renewed buzz and commitment to the platform by senior business executives at IBM. It is clear that the system is front and center in the overall Data and AI strategy and that IBM continues to commit resources to further the platform's development.

We feel that IBM, as promised, did a great job of continuing to enhance the platform over the last 6-12 months. These tactical, step-by-step updates were focused on enhancing the core set of features and capabilities as well as improving the performance and stability of the platform. The roadmap includes a number of more strategic updates and we feel that many of them are well along the way. While IBM did not provide any specific timelines or deadlines for individual deliverables, we would not be surprised if there are a lot of accomplishments to discuss by the IBM Think conference in San Francisco in May 2020.

A high level outline of the key areas of focus is the following:

The “TM1” Name is Back!

In a move that is widely applauded by the core User and Business Partner base, IBM is bringing the “TM1” name back. The name has great recognition across the user base globally. Over and over, we continue getting the question, “What is the difference between IBM PA and TM1?” This marketing adjustment will bring more clarity to the platform and reduce confusion. The new marketing tagline is “IBM Planning Analytics powered by IBM TM1”.

IBM Planning Analytics Workspace (PAW)

Some of the key recent enhancements to PAW include rich text formatting, action buttons, a wider variety of fonts, drill-up on visualizations, corporate themes and other features. In addition to ongoing updates to core features, focus continues on making Workspace the primary modeling interface and closing the gap to Architect. At this stage, a lot of the capabilities are in place, with only minor items remaining (cube locking, drill definition and others).

IBM Planning Analytics for Excel (PAX)

The Excel add-in has been significantly improved and stabilized over the last 3-6 months. It seems that major gaps to Perspectives have been substantively closed. Key future focus areas include the use of virtual hierarchies in all reports, adding relative proportional spread as an option, further enhancing the flexibility of Quick Reports, as well as continued tackling of small fixes and enhancement requests.

Integration with Cognos Analytics (BI)

Further integrating IBM PA and CA across all platforms and configurations (On-Prem, Cloud Dedicated, Cloud Standard, etc.) is a top priority. With native integration in place, the ability to cross-leverage the platforms across all deployment options is still a work in progress from both a technical and a licensing perspective. Efforts to provide on-demand integration as well as give individual users access will provide great price flexibility to users with both platforms.

Cloud Capacity Increase

To accommodate larger customers and more complex models in the Cloud, IBM provides additional memory options beyond 512GB, with as much as a 2TB capacity limit as part of the standard offering. While we never really saw the current memory limit as a modeling limitation, with cloud adoption gaining further momentum this will make migration of large enterprise customers more viable.

Guided Navigation / Modeling

After a long time discussing various approaches to Workflow and collaboration, it seems that IBM is taking a simplified and more intuitive / faster approach to the problem. Based on some of the wireframe pictures and views it seems that this will be a simple and intuitive way to provide a customized work environment with guided navigation. It will provide more of an “Application” look and feel that will offer more configuration options to the end users and will be more intuitive to deploy. That in turn will drive adoption and reduce time to market, especially for smaller players.

Licensing Options and Overall Value

IBM is introducing a number of new licensing options that will likely make the platform even more accessible to small companies and provide a cost effective option to adopt and grow the application starting with a small user base. In addition to this, IBM introduced a number of competitive pricing offers and sales plays in 2019 that will make the option very attractive.

Topics: IBM Cognos TM1, IBM Planning Analytics

Using REST API with IBM Planning Analytics

Posted by Brian Plaia on Tue, Jun, 25, 2019 @ 02:46 PM

What is the TM1 REST API

The TM1 REST API is a relatively unexplored method of interacting with TM1 that allows external applications to access server objects directly. Introduced in version 10.2 and constantly improved upon with each Planning Analytics release, the TM1 REST API conforms to the OData V4 standard. This means that the REST API returns standard JSON objects which can be easily manipulated in most programming languages. This is a change from previous proprietary TM1 APIs (such as the VBA / Excel and Java APIs) that required handling of custom TM1 objects.

Currently, almost all administrative aspects of the TM1 server that are available in Architect, and even some that are not, such as retrieving thread status and canceling threads (functionality provided by TM1Top), can be handled using the REST API. Functions and actions are available to create cubes, dimensions, subsets, hierarchies, TI processes and rules, as well as to monitor transaction logging and even commit TI code to a Git repository.

How do I Enable the REST API?

First, you have to ensure that your TM1 version supports the REST API. It was introduced in version 10.2; however, a number of key features are not available before 10.2.2 FP5, so that is the recommended minimum version.

To access an instance via the REST API, the tm1s.cfg parameter “HTTPPortNumber” must be set to a unique port. The instance may need to be recycled prior to use, as this is a static parameter.
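For example, the relevant tm1s.cfg entry might look like the following (the port number shown is only an example; choose any unused port on the server):

```ini
; tm1s.cfg excerpt - HTTPPortNumber must be unique per TM1 instance
HTTPPortNumber=8882
```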

What Can’t I Do with the REST API?

At its core, the TM1 REST API is not end-user ready.  It is merely a tool for developers to interact with TM1 from external applications/languages and more easily integrate TM1 into your data stream.

It is possible to create end-user consumables (such as dashboards and reports that are referencing / interacting with data stored in TM1 cubes) using the REST API as the connection to TM1, however it should be noted that these solutions will have to be custom-built in another language (such as HTML, PHP, Python, Java, Powershell or a combination of some languages) or built on top of PAW (which in the back-end uses the REST API to connect to TM1).

How is the REST API being used in the real world currently?

The REST API is the back-end connection for both IBM Planning Analytics Workspace (PAW) and IBM Planning Analytics for Excel (PAX).  Any reports/dashboards built in these tools will use the REST API seamlessly by default with no additional configuration/development required.

A majority of current REST API use cases in production-level deployments at a number of clients involve remotely executing TI processes in a multi-threaded fashion. There is a benefit in using the REST API to kick off parallel TI processes over the old TM1RunTI.exe utility in that the REST API avoids CAM logon lock contention, as well as the need to build a custom thread-tracking mechanism. When coupled with Python's async capabilities, a Python script is able to manage thread limits and overall execution status tracking without the need to build flag files into your TI processes.
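The pattern described above can be sketched with asyncio. This is a self-contained illustration, not production code: the HTTP call to the process-execution endpoint is mocked out (the process names and the "CompletedSuccessfully" status string are assumptions for the example), but the fan-out, the concurrency cap, and the status tracking are the real shape of the solution:

```python
import asyncio

# Sketch: fan out TI process executions with a concurrency cap, as a
# Python script might via the REST API. The real call would be an HTTP
# POST to the server's process-execution endpoint; it is mocked here
# so the sketch runs standalone.
async def run_process(name, semaphore, results):
    async with semaphore:              # enforce the thread limit
        await asyncio.sleep(0)         # stands in for the HTTP round trip
        results[name] = "CompletedSuccessfully"  # mocked status

async def run_all(process_names, max_threads=4):
    semaphore = asyncio.Semaphore(max_threads)
    results = {}
    await asyncio.gather(*(run_process(n, semaphore, results)
                           for n in process_names))
    return results

statuses = asyncio.run(run_all([f"load_region_{i}" for i in range(8)]))
```

The semaphore plays the role of the thread-limit management mentioned above, and the returned status dictionary replaces the flag files a TI-only solution would need.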

In addition to using the REST API to kick off TI processes, other clients are using it for minor data syncs between instances; for instance, two different TM1 instances which have copies of the exact same cube structure. Rather than using flat files and custom TI processes to push data between instances, a simple Python script to create and push a cellset from the source instance to the target instance accomplishes the task without the need for third-party software to execute processes on both instances in sequence.
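A minimal sketch of the push half of such a sync is shown below. The body shape, cube name and dimension elements are illustrative assumptions, not the exact REST API payload format; the point is simply that source-instance values are reshaped into a JSON body and posted to the target instance:

```python
import json

# Sketch: shape the body a sync script might POST to the target instance
# after reading values from the source instance. The payload layout and
# names here are illustrative, not the exact endpoint format.
def build_update_body(cube, cells):
    return {
        "Cube": f"Cubes('{cube}')",
        "Cells": [
            {"Tuple": list(coords), "Value": value}
            for coords, value in cells.items()
        ],
    }

body = build_update_body("Sales", {("Revenue", "Jan"): 1000.0})
payload = json.dumps(body)  # this string would be the POST body
```

Because both instances share the same cube structure, the tuples read from the source map one-for-one onto the target with no translation step.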

Topics: IBM Cognos TM1, IBM Planning Analytics, Rest API

Adding Single Sign-On to IBM Planning Analytics Local

Posted by Patrick Mauro on Sat, Jun, 01, 2019 @ 09:07 AM

Adding single sign-on capabilities is a great way to get the most out of IBM Planning Analytics Local, whether as part of an upgrade or simply to enhance an existing install; in fact, it is often a business requirement for many clients. Unfortunately, the full procedure is not well documented by IBM: while bits and pieces are available, crucial information is often omitted. As a result, many companies struggle to implement the feature properly, if at all.

To help alleviate this problem, we have created a detailed guide on how to properly implement SSO based on our experience, which can be accessed here. This 13-page document includes detailed screenshots, links to any external docs or software needed, code block examples, etc.

What follows is a brief summary of this process – essentially, the configuration consists of three main steps:

  • Install and configure IIS
  • Install and configure Gateway
  • Edit Configurations and Finalize

Step 1: Install and Configure IIS

Internet Information Services (IIS) is Microsoft’s native web server software for Windows Server, which can be configured to serve a variety of different roles. In this context, it is required in order for the Planning Analytics server software to authenticate users based on Windows account information – so this must be set up first.

Available documentation often neglects to mention that this is a prerequisite, and does not provide adequate information on what specific role services will be required; our documentation provides all the details needed for setup. For additional information about IIS setup, Microsoft’s own IIS documentation may also be of use.

Step 2: Install and Configure Cognos Gateway

Cognos Gateway is an optional feature provided with Cognos Analytics (also known as BI), and is required in order to communicate with the IIS services and facilitate user log-in. However, as of this writing, a “full” install of CA will not include this Gateway by default, and it is not possible to retroactively add it cleanly to an existing CA install. As a result, this requires a very careful separate install of only the Gateway component, correctly configured to link to the existing CA instance.

However, even after Gateway is installed, the work is not done. Additional files must also be added to Gateway after it is installed: a group of deprecated files included in the web server install in a ZIP file, BI_INTEROP.ZIP. Documentation for CA mentions this is a requirement for Active Directory; it is also required for Gateway, but needs additional modifications that are not detailed in existing docs. All needed information on these modifications is provided in our guide.

Step 3: Modify Configurations and Finalize

As part of the process above, we have essentially replaced the existing access point for CA with a new one. As such, the final step here is the easiest: going into all the existing configuration files and making sure all the references are accurate and include the new URIs, to ensure the system is properly communicating between each of the components.

Once each of the configurations has been adjusted properly, we should be all set: there should no longer be any user login prompt for PAX, Architect, CA, TM1Web or PAW. That said, our guide includes some additional features you might consider adding to enhance this even further.

You can access the full 13-page document at this link. If you have any additional questions, feel free to contact ACG or give our offices a call at (973) 898-0012. We’d be happy to help clarify anything we can!

Topics: IBM Cognos TM1, IBM Planning Analytics

Working with the Audit Log and Transaction Log in IBM Planning Analytics (IBM Cognos TM1)

Posted by Andrew Burke on Fri, Apr, 12, 2019 @ 02:59 PM

IBM Planning Analytics maintains a detailed log of all changes in the system caused by both users and processes, including changes in values or core structures. Maintaining full transparency is critical to facilitating key business and control requirements, including the following:

  • Understanding the drivers of changes from prior versions
  • Rolling back / restoring previous values in case of errors or system issues
  • Providing an audit trail of all system changes for control and compliance purposes
  • Obtaining better insight into the status of deliverables beyond what is available via Workflow
  • Understanding system usage for license compliance and ROI measurement

Two Types of Logging

There are two types of logs in IBM Planning Analytics. Both can be enabled or turned off by the system administrator as required based on existing requirements:

  • Transaction Log - captures all changes to data caused by end users, rules or processes
  • Audit Log - captures changes to metadata, security, logon information and other system activity detail

A Transaction Log represents a simple list of all changes in values along with the associated context, such as cube name, time and date, user name, value pre- and post-change, and the element of every dimension of the particular cube at the time of change. A typical Transaction Log looks something like this:

[Screenshot: sample Transaction Log]
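To illustrate the structure of such an entry, the sketch below parses a hand-written line in the spirit of a transaction log record (the real log layout differs slightly by version; the field order and sample values here are assumptions):

```python
import csv
import io

# Sketch: parse a hand-written, transaction-log-style line into named
# fields. Real log layout varies by version; this is illustrative only.
sample_line = '"20200206081500","admin","Sales","N","1000","1100","Revenue","Jan"'

def parse_transaction(line):
    fields = next(csv.reader(io.StringIO(line)))
    timestamp, user, cube, dtype, old, new, *elements = fields
    return {"timestamp": timestamp, "user": user, "cube": cube,
            "old_value": old, "new_value": new, "elements": elements}

record = parse_transaction(sample_line)
```

Parsed this way, each record carries everything needed for the reporting and roll-back uses discussed below: who changed what value, in which cube, at which intersection, and when.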

The Audit Log follows a similar concept; however, it includes more verbal / contextual information.

Reporting of data changes

Once captured, all transactions in the log can be brought into an IBM Planning Analytics reporting view and filtered for a specific application (cube), time period, account group, etc. to provide transparency into all changes made. Changes to structures and metadata (new accounts or other elements) are tracked in a similar view and can be included for reporting and analysis. The end view could look something like this:

[Screenshot: sample log report]

Some of the other practical benefits of using the log information include the following:

Getting Insight into System Usage

The audit log provides an easy view of how the system is used. Users can select a particular time period and see which users logged in, when, and how often. This provides good visibility to ensure compliance and efficiency from a licensing perspective, but also to ensure that the company is getting ROI from the investment and people are using the system to do their work.
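A usage summary like the one described can be produced with a few lines of code. This sketch counts logins per user from mocked audit records; the record shape, user names and event labels are invented for the example:

```python
from collections import Counter

# Sketch: summarize login activity from (mocked) audit-log records to see
# who used the system and how often. Record shape is illustrative only.
audit_records = [
    {"user": "jsmith", "event": "LOGIN",       "date": "2019-04-10"},
    {"user": "jsmith", "event": "LOGIN",       "date": "2019-04-11"},
    {"user": "mlee",   "event": "LOGIN",       "date": "2019-04-11"},
    {"user": "jsmith", "event": "DATA_CHANGE", "date": "2019-04-11"},
]

def logins_per_user(records):
    # Count only login events, ignoring data changes and other activity.
    return Counter(r["user"] for r in records if r["event"] == "LOGIN")

usage = logins_per_user(audit_records)
```

Filtering by date range or cost center before counting gives the per-period license-compliance view described above.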

Deliverable Status Check

As a complement to IBM Planning Analytics Workflow, the Audit Log provides immediate visibility into the status of a particular deliverable. Say the objective is to complete a forecast by Friday 5pm. As of Wednesday that week, the Workflow application shows all submitting Cost Centers as "In Progress" with no further information. The Audit Log will reveal how many users actually logged in and did work in a particular cost center / application, even though the work is still "In Progress" and not ready for submission. While anecdotal, this provides good insight into approximately where the organization is in the submission process and whether the deadline is likely to be met.

These are some examples of useful applications of this not-very-widely-known feature. ACG has a set of tools and accelerators to help translate both the Transaction Log and the Audit Log into useful views in IBM Planning Analytics that can be viewed and analyzed. Any thoughts / feedback / examples of use or other suggestions would be welcome.


Topics: IBM Cognos TM1, IBM Planning Analytics

Working with Virtual Dimensions in IBM Planning Analytics 2.0.

Posted by Theo Chen on Mon, Apr, 23, 2018 @ 02:01 PM

The ability to create virtual dimensions from attribute-based hierarchies is a key update in IBM Planning Analytics 2.0 and a huge source of value. This is a major differentiator for IBM PA compared to other similar platforms. It provides even greater flexibility to the user to analyze data using user-defined parameters on the fly. With virtual hierarchies, companies have much greater flexibility in designing solutions that provide greater efficiency yet maintain the flexibility to include attributes for thorough analysis.

What are Virtual Dimensions

Virtual dimensions are dimensions that are created in an existing IBM Planning Analytics system on demand by selected end users (system administrators). These dimensions are not part of the core system structure / design, but rather are created ad-hoc as needed based on attributes of individual existing dimension elements. Attributes can be created as and when needed while the system is in use and there are no practical limitations to how many attributes can be created for every single member.

Once a virtual dimension is created, it can be used just like any other dimension. It can be selected for reporting, brought into a cross-tab view for analysis, or used as a target for input of (plan, forecast) data.

What Virtual Dimensions are NOT

Virtual dimensions in IBM PA are NOT the same as the ability to create alternate rollups / hierarchies or to sort / filter by attributes. We find that the concept is a bit hard to grasp initially, and people default to the capabilities they know. Unlike filtering and alternate hierarchies, virtual dimensions go deeper and provide a more thorough way to analyze the data.


Consider an example: we have a sales reporting application with Customer and Business Unit as the two key dimensions. Other dimensions include Account, Time, etc. The Customer dimension is organized by Industry – Customers roll up to Sectors, Sectors roll up to Industries, and Industries roll up to Total Customer.

Let’s say I want to understand my sales by company size (Enterprise vs Mid-Market vs Small and Medium Enterprise, or SME) in addition to the industry rollup. I do not have “Company Size” defined as a dimension, so prior to version 2.0 I would have had the following options:

  • Redesign the cube to include “Size” as a dimension – depending on the size of the overall model that could be a significant undertaking and could take a lot of time
  • Build an alternate rollup of Customer by size – in this case I would have to choose the Industry or Size rollup for reporting and analysis but I could not use both and would still not get the desired cross-view
  • Embed “Size” inside the existing “Industry” rollup – this would be extremely inefficient, as I would have to repeat multiple Size rollups within each Industry / Segment and deal with conflicting element values – undesirable from a maintenance perspective, and I would still not get the same flexibility for analysis

Enter “Virtual Hierarchies” – with IBM Planning Analytics 2.0, I can simply create a new “Attribute” and label each Customer an Enterprise, Mid-Market firm, or SME. Once that is done, I turn this Attribute into a Virtual Hierarchy, and it shows up in my “Set” menu. From there, I can simply drag the “Size” hierarchy into the cross-tab view and see the breakdown of customers by size and industry in a simple cross-tab. I am able to drill down or pivot to analyze the data, and even input into the virtual intersections to post adjustments or forecast sales. And of course, element security still works at the leaf level. Extremely powerful…
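Conceptually, a virtual hierarchy is just a second way of grouping the same leaf elements – by an attribute value rather than the structural rollup. A minimal Python sketch of the Industry × Size cross-tab described above (the customer list, attribute values, and sales figures are hypothetical, and this simulates the idea rather than calling the TM1 API):

```python
from collections import defaultdict

# Hypothetical customer master data: each customer carries an Industry
# rollup plus a "Size" attribute (the basis for the virtual hierarchy).
customers = {
    "Acme Corp": {"Industry": "Manufacturing", "Size": "Enterprise"},
    "Beta LLC":  {"Industry": "Manufacturing", "Size": "SME"},
    "Gamma Inc": {"Industry": "Retail",        "Size": "Mid-Market"},
    "Delta Co":  {"Industry": "Retail",        "Size": "Enterprise"},
}

# Hypothetical leaf-level sales figures by customer.
sales = {"Acme Corp": 500, "Beta LLC": 120, "Gamma Inc": 250, "Delta Co": 400}

def cross_tab(row_attr, col_attr):
    """Aggregate leaf-level sales by two attribute-derived rollups."""
    result = defaultdict(float)
    for cust, amount in sales.items():
        key = (customers[cust][row_attr], customers[cust][col_attr])
        result[key] += amount
    return dict(result)

# Sales by Industry x Size -- the view a virtual "Size" hierarchy enables
# without redesigning the cube.
view = cross_tab("Industry", "Size")
print(view[("Manufacturing", "Enterprise")])  # 500.0
```

The same leaf data answers both the Industry question and the Size question, which is exactly why no cube redesign is needed.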

Key Benefits

The key benefits of this capability include the following:

  • Savings in memory (RAM) due to fewer core dimensions and thus a smaller core model
  • Better overall performance and usability of the model with less clutter / complexity
  • Much greater flexibility to adjust existing models and get deeper insight
  • Ability to conform to model standards while keeping any customization local

What to Look Out For

Some points to be aware of when working with virtual hierarchies:

  • They can currently only be built two levels deep – TurboIntegrator scripting is required for deeper hierarchies with more levels.
  • Elements in a virtual hierarchy are not elements of the main dimension itself – each element in the virtual hierarchy must be unique from every element in the dimension.
  • There is no security (yet) for 'virtual' elements, although that should be addressed soon.

The impact of Virtual Hierarchies will be different for existing vs net new applications. For existing models, it will add flexibility through the ability to expand structures without going through a potentially substantial application redesign. For new solutions, it creates an opportunity to design models with more simplicity and rely on Virtual Dimensions to provide scalability and flexibility in the future.

Give us a call to discuss how this great new capability can add value to your platform and review options to provide more insight and analytical power.

Topics: TM1 Technology, IBM Cognos TM1, Performance Management, Financial Planning and Analysis, IBM Planning Analytics

Object Security Integration with IBM Cognos TM1

Posted by James Doren on Sat, Jun, 27, 2015 @ 10:40 AM

One of the key security-related selling points within IBM Cognos TM1 is the tool’s ability to assign different security levels to different objects within the TM1 model for any given user or group. Properly handling object security interaction within a TM1 model can provide an unparalleled level of control over the system, user base, and data. As we have seen at many clients, it improves the reliability of your data by ensuring that only users qualified to change data at a given intersection can do so, and it allows for unique configurations by personalizing the security rights for each individual group. This flexibility helps users get the most out of their TM1 model, so understanding exactly how it works is very important.

Security rights can be set up to gain precise control over what users can and cannot see, add, delete, or change. There are four levels of object security that pertain to a user’s ability to manipulate data:

  • Cube Security
  • Dimension Security
  • Element Security
  • Cell Level Security

For the examples that follow, we will assume a single user is assigned to a single group. This way, when we talk about the ‘User’, we understand that this user is part of a TM1 security group. Let’s look at these different levels of object security in their simplest form. Say, for example, a user is assigned READ access to a cube called ‘Planning Cube A’. If this is the sole security assignment, the user will have READ access to all intersections of the cube. If a user is assigned READ access to a dimension called ‘Time’, then the user will have READ access to all intersections involving the ‘Time’ dimension. Now let’s say the only security assigned to a user is element-level security on the ‘Time’ dimension: for example, READ access for the element ‘2015’, which gives the user READ access ONLY to intersections involving ‘2015’. The same concept applies to cell-level security.

Simple enough, right? Well, the tricky part lies within the way the security settings for these four different objects interact with each other when multiple levels of object security are assigned. Let us look at the example below:

You have a cube called ‘Planning Cube A’ with the following dimensions:

  1. Version
  2. Territory
  3. Account
  4. Year
  5. Measure

You assign a user READ access to the ‘Planning Cube A’ cube, but you also assign the same user WRITE access to all of the elements in this cube.

What do you think would happen in this example? In this case, the READ access on the cube overrides the WRITE access on the elements, allowing the user to view the data within the cube but not to update any of it.

Let us look at another example, assuming we are talking about the same cube with the same dimensions as the last example:

You assign a user WRITE access to the ‘Planning Cube A’ cube, WRITE access to all of the elements within all of the dimensions EXCEPT for the ‘Version’ dimension where you assign the user READ access to all of the elements within the ‘Version’ dimension.

In this case, every intersection in the cube involves an element of the ‘Version’ dimension, and since the user is assigned READ access to all elements in that dimension, the user is unable to update any data within the cube.

As stated by IBM, “When groups have security rights for a cube, those rights apply to all dimensions in the cube, unless you further restrict access for specific dimensions or elements”.
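The interaction rule in these examples can be modeled as taking the most restrictive access across all applicable object levels. A short Python sketch of that rule (a simplification – these level names are a subset of TM1’s actual levels, and real TM1 also applies defaults when no assignment exists):

```python
# Ordered from most to least restrictive (simplified subset of TM1's levels).
LEVELS = {"NONE": 0, "READ": 1, "WRITE": 2}

def effective_access(*assignments):
    """The effective right is the most restrictive of the applicable
    object-level assignments (cube, dimension, element, cell)."""
    return min(assignments, key=lambda a: LEVELS[a])

# Example 1: READ on the cube overrides WRITE on the elements.
print(effective_access("READ", "WRITE"))           # READ

# Example 2: WRITE on the cube and on most elements, but READ on every
# element of the Version dimension -> the user cannot write anywhere.
print(effective_access("WRITE", "WRITE", "READ"))  # READ
```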

Why this type of interaction is useful

This type of object security interaction is essential for having full control over your users and groups. By being able to set security access at different object levels, you can pinpoint and assign the security settings for a given user exactly how you want them to be. Imagine this scenario, referring to the same cube / dimensions as the prior examples:

You wish to have all groups be able to view and read cube data for all territories specified in the ‘Territory’ dimension. You also want each group to be able to update cube data ONLY for their own territory and ONLY for ‘2015’.

By properly manipulating the different levels of object security, and utilizing the way they interact with each other, this can be accomplished quite easily with no coding and no additions to the security model.

To achieve this, we would take the following steps:

  1. Assign each Territory group WRITE access to the ‘Planning Cube A’ cube
  2. Assign each Territory group READ access to all territories (element level security) within the ‘Territory’ dimension that do not apply to that group
    • For example, assign the ‘North’ territory group READ access to all elements except ‘North’ in the Territory dimension
  3. Assign each group READ access to all elements except for the ‘2015’ element in the year dimension (element level security)

With this security scheme set up, users will be able to view all cube data, but will only be able to update cube data related to the territories applicable to them.
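The three steps above can be sketched the same way, with the most-restrictive rule applied at each intersection (group names, elements, and the simplified access levels are illustrative, not the actual TM1 security engine):

```python
# Simplified access levels, ordered from most to least restrictive.
LEVELS = {"NONE": 0, "READ": 1, "WRITE": 2}

def most_restrictive(*assignments):
    return min(assignments, key=lambda a: LEVELS[a])

territories = ["North", "South", "East", "West"]
years = ["2014", "2015", "2016"]

def access(group_territory, cell_territory, cell_year):
    """Effective access for a territory group at one intersection:
    WRITE on the cube, element-level READ on other groups' territories,
    and element-level READ on every year except '2015'."""
    cube = "WRITE"
    territory = "WRITE" if cell_territory == group_territory else "READ"
    year = "WRITE" if cell_year == "2015" else "READ"
    return most_restrictive(cube, territory, year)

# The only writable intersections for the North group are its own
# territory in 2015; everything else falls back to READ.
writable = [(t, y) for t in territories for y in years
            if access("North", t, y) == "WRITE"]
print(writable)  # [('North', '2015')]
```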

While these are just a few examples of how object security interaction works, many great applications can come from fully understanding these relationships. In summary, if a cube has READ access specified for a group, then even if elements or cells are specified as WRITE, users in the group will not be able to update any cube data. However, if a cube is given WRITE access for a group, all of the dimensions by default take on the access of the cube security assignment. In this way – a sort of top-down approach – specific intersections and elements can be restricted for a particular group.

Topics: TM1 Technology, IBM Cognos, IBM Cognos TM1

The Future of IBM Cognos TM1

Posted by Jim Wood on Wed, May, 13, 2015 @ 09:46 AM

IBM Cognos TM1 version 10 has now been around for a good while and has already gone through several minor and major updates. The move from version 10.1 to 10.2 itself was a significant change, even though it was within the same release stream. The key updates included the move from .Net to Java as the main platform and support for multi-threading queries.

Despite all these changes and updates, the underlying framework and structures have remained the same. Aside from the addition of CAFE, which delivered significant performance optimization and flexibility for users over a wide area network, most interfaces have not changed in a number of years.

So the question has to be: What’s next for TM1?

IBM has been aware of customer frustrations with the TM1 interfaces for quite some time. Performance Modeler and the Application Web server were a step in the right direction, but they were aimed more at easing the transition for the Cognos Enterprise Planning market. Interfaces such as Architect, which are still heavily relied upon, haven’t seen a major update since the start of version 9.

It seems that IBM plans to address these issues in the next release through a tool called “Prism”, which is currently in the Alpha phase. It is not entirely clear what the new interface will look like, as IBM is playing its cards close to its chest; however, rumor has it that it will be a significant upgrade and will close the gap with some of the other analytics and visualization platforms. What has been confirmed thus far is that Prism will replace Architect and Perspectives.

IBM provided an overview of Prism at their IBM Vision 2015 conference in Orlando in May 2015.

Is Prism going to be a massive leap forward for TM1?

Hopefully yes. Even though only a small amount of information is available, it is clear that IBM is making a heavy push to further enhance the system and is determined to strengthen its position in the market by committing resources. This enhancement to the front end of the tool would be a welcome and refreshing improvement that would go a long way toward streamlining overall usability. Time will tell whether Prism lives up to its promise, but the development seems to be moving in the right direction.

What do I need to do to stay on top of what Prism will bring?

With the demonstration at IBM Vision 2015 in May, chances are a lot more information will become available in the near term on the scope and look / feel of the system. You can always contact your IBM account representative / business partner and ask to be included on communications as new information becomes available. For those who want to stay really close to the updates, you can sign up as a Beta tester and gain a better understanding of what is coming and whether / how it will impact your application and vision for the system.

Topics: TM1 Technology, IBM Cognos, IBM Cognos TM1, Performance Management

Contact ACG

Tel: (973) 898-0012
