ACG Business Analytics Blog

Best Practices for IBM Planning Analytics (TM1) Integration

Posted by Matt Berson on Thu, Feb, 06, 2020 @ 08:27 AM

Effective integration of IBM Planning Analytics (TM1) with existing infrastructure is usually one of the top priorities of organizations looking to use TM1. TM1 has a set of embedded tools that make integration reasonably straightforward for any combination of cloud and on-premises interactions. The three most common options are outlined below – which one is optimal will vary by situation depending on the criteria outlined at the end of this post.

TM1 Integration Options

The three most common integration options are the following:

ODBC Connection using TM1 Turbo Integrator

Turbo Integrator (TI) is a utility that lets you create processes to automate data importation, manage your metadata, and perform other administrative tasks in the TM1 environment. TI also provides a powerful scripting language used throughout TM1.

The language can be used as an ETL tool as well as for other scripting and rule writing. Using TI, TM1 can leverage an ODBC connection to almost any relational database. Once the ODBC connection is established, TM1 can integrate with the database to both load data and push data back, as follows:

Pull data into cubes with a defined SQL query.

This is the most common way to pull data into TM1. The process is efficient and fast; TM1 supports multi-threaded loading, running parallel processes to load data at significant speed. The uploads can be scheduled or executed on demand as necessary. This approach is commonly used to load GL Actuals into cubes during the monthly close process, often as frequently as every 15 minutes during critical times.

Push data back into relational tables using the “ODBCOutput” function.

This function allows for the execution of a SQL command against a defined ODBC connection. Unlike the pull mentioned above, the ODBCOutput function is not very sophisticated. It executes and commits a single SQL command at a time (e.g., a single INSERT statement). It is generally slow and is normally only appropriate for smaller data volumes.

The TI scripting in both mechanisms above is normally very simple and only requires a basic understanding of TI functions; the TM1 GUI will help with a lot of the configuration. Users can also layer in additional logic to transform and manipulate the data as required by the implementation.

Push-Pull Using Flat Files

TI can also read and create flat files. Similar to ODBC connections, this can be used for both incoming and outgoing data. Inbound data is normally handled through ODBC connections; however, due to the limitations of the ODBCOutput function, flat files are a very common solution for pushing data out of TM1 to a relational database. After TM1 writes the flat file, the relational database loads it. These processes can be orchestrated through schedules, trigger files, custom scripts, or whatever orchestration tool is in place.
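As a rough illustration of the orchestration side, the sketch below waits for a trigger file dropped by a TI export process and then bulk-loads the exported file into a relational table. The file paths, DSN, and table name are hypothetical placeholders, and the BULK INSERT syntax assumes SQL Server; adjust for your database and scheduler of choice.

```python
# Illustrative orchestration sketch (runs outside TM1): wait for the trigger file
# written by a TI export process, then load the exported CSV into a SQL table.
# File paths, DSN, and table names are hypothetical placeholders.
import os
import time
import pyodbc

EXPORT_FILE = r"\\shared\tm1_exports\forecast_export.csv"    # written by a TI process (e.g. via ASCIIOutput)
TRIGGER_FILE = r"\\shared\tm1_exports\forecast_export.done"  # signals that the export is complete

def wait_for_trigger(path, poll_seconds=30):
    """Poll until the TI process drops the trigger file."""
    while not os.path.exists(path):
        time.sleep(poll_seconds)

def bulk_load():
    conn = pyodbc.connect("DSN=FINANCE_DW;Trusted_Connection=yes")  # assumed ODBC DSN
    cursor = conn.cursor()
    # BULK INSERT requires the file to be visible to the SQL Server host;
    # swap in bcp, COPY, or an external-table load for other databases.
    cursor.execute(
        f"BULK INSERT dbo.TM1_FORECAST FROM '{EXPORT_FILE}' "
        "WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n', FIRSTROW = 2)"
    )
    conn.commit()
    conn.close()

if __name__ == "__main__":
    wait_for_trigger(TRIGGER_FILE)
    bulk_load()
    os.remove(TRIGGER_FILE)  # reset for the next run
```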

There are a couple of downsides to this approach:

  • It is very reliant on orchestration tools or remote calls to make sure the jobs run in tandem and stay synchronized
  • In the current environment, the data integration frequently involves cloud-to-on-prem OR cloud-to-cloud data transfer, which adds an extra layer of complexity in managing file transfer credentials and tokens

Leveraging the REST API

The API option is becoming more common and opens up the possibility of a single tool managing the push-pull of data. Uploading data into TM1 using the REST API is fairly straightforward. For reverse integration (pushing data from TM1 to a data warehouse), an external process can request data from the TM1 cubes using the TM1 REST API. The data is returned in a JSON response that can then be parsed and handled in a variety of ways. Some examples are the following:

  • Script everything using Python, including the REST API call, parsing the response, and inserting into a SQL table (a sketch follows this list)
  • Call the REST API URLs from the command line with curl (or a similar HTTP client)
  • Use standard JSON parsing tools, such as Oracle's built-in JSON functions, to translate the return value into relational rows
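The first bullet is sketched below end to end. The host, port, credentials, MDX, and target table are placeholders, and the ExecuteMDX endpoint shape should be verified against the REST API reference for your TM1 version.

```python
# Sketch only: pull a cellset from TM1 over the REST API and insert the values
# into a relational staging table. Host, credentials, MDX, and table names are
# hypothetical placeholders.
import requests
import pyodbc

TM1_URL = "https://tm1server:8010/api/v1"   # port = HTTPPortNumber from tm1s.cfg
MDX = "SELECT {[Period].[Jan]} ON COLUMNS, {TM1SubsetAll([Account])} ON ROWS FROM [GL]"

response = requests.post(
    f"{TM1_URL}/ExecuteMDX?$expand=Cells($select=Value)",
    json={"MDX": MDX},
    auth=("admin", "apple"),   # TM1 native security; CAM security uses a different header
    verify=False,              # self-signed certificates are common on dev instances
)
response.raise_for_status()
values = [cell["Value"] for cell in response.json()["Cells"]]

conn = pyodbc.connect("DSN=FINANCE_DW;Trusted_Connection=yes")  # assumed ODBC DSN
cursor = conn.cursor()
cursor.executemany("INSERT INTO dbo.TM1_STAGING (cell_value) VALUES (?)",
                   [(v,) for v in values])
conn.commit()
conn.close()
```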

More information about the TM1 REST API can be found here.

ODBO Connection (TM1 Integration with Multi-Dimensional sources)

One additional, though less commonly leveraged, option is using an ODBO connection to other multi-dimensional sources. This could be used to connect to repositories such as Essbase, Microsoft SQL Server Analysis Services, or even other TM1 server instances. From there, MDX queries can be used to retrieve data in much the same way SQL is used within ODBC connections. This approach can remove the middleman and the additional effort needed to extract formatted information from existing systems, and/or perform automated validations between related systems.

What Determines the Best Approach

The best option will vary across organizations and use cases and has to be decided in the proper business context. Some of the key questions and determinants are the following:

  1. How frequent is the data transmission?
  2. What is the timing of the data transmission?
  3. What is the volume of each transmission?
  4. What and where are my data sources?
  5. Is the organization investing time and resources in better data orchestration between disparate tools?

Integrated Budgeting Solutions

Topics: IBM Cognos TM1, IBM Planning Analytics

Integrated Budgeting for Balance Sheet and Cash Flow

Posted by Rob Harag on Mon, Jan, 27, 2020 @ 07:10 AM

The primary focus of budgeting and forecasting at a majority of companies is the Income Statement and associated KPIs (Revenue, EBITDA, Net Income). Significant investments are often made to increase the accuracy of the budget and forecast via driver-based modeling for key P&L line items such as Compensation and CAPEX. Planning for the P&L is distributed across the enterprise to collect assumptions that drive business performance. Budgeting and forecasting of the Balance Sheet and Cash position is often done offline by a smaller group in finance.

Meanwhile - Balance Sheet and, more specifically, Cash are critical business variables that need to be understood and closely managed. This is even more important in fast-growing companies that may rely on limited sources of funding to help fuel their growth before they reach the break-even point. Any economic growth that is modeled in the Income Statement can only be achieved if there are adequate sources of funding in the long term and the cash balance can be sustained. A number of our customers have told us that understanding their cash balance over the medium term was one of their top priorities when installing a new finance system.

An Integrated Financial Budgeting solution that connects the dots across the financials can create a lot of efficiency and increase insight. A simplified example of a model that we used with a number of customers can be seen in this video. Using an interconnected set of inputs and assumptions that are driven by the end user, every financial assumption automatically translates to the P&L and balance sheet and shows the resulting impact on cash. An actual model will support much more depth with a much larger variety of assumptions. Some of the sample transactions shown here are the following (a small numeric illustration follows the list):

  • Modeling of revenue and how that automatically translates into Accounts Receivable
  • Relationship between accounts receivable and cash based on user-driven assumptions for cash collection
  • Budgeting for individual line items such as Insurance Prepayments and the related impact of amortizing the balance into expenses over time
  • Purchases of individual assets with impact on cash as well as P&L and retained earnings based on user-driven payment and depreciation assumptions
  • A Balance Sheet walk-forward to explain the changes in balances month-over-month by showing additions and reductions in balances by account line item.
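To make the first two bullets concrete, here is a small, purely illustrative calculation; the figures and the collection assumption are invented for illustration, not taken from any customer model.

```python
# Illustrative arithmetic only: how a revenue assumption flows to Accounts
# Receivable and cash under a simple user-driven collection assumption.
monthly_revenue = 1_000_000          # booked revenue for the month
collected_in_month_pct = 0.60        # cash collected in the month of sale
collected_next_month_pct = 0.40      # remainder collected the following month

cash_collected_this_month = monthly_revenue * collected_in_month_pct
ending_accounts_receivable = monthly_revenue * collected_next_month_pct

print(f"Cash collected this month:  {cash_collected_this_month:,.0f}")      # 600,000
print(f"Ending AR (due next month): {ending_accounts_receivable:,.0f}")     # 400,000
```

In the full model, relationships like this live inside the cubes, so the balance sheet and cash impact update immediately as assumptions change.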

This model is powered by IBM Planning Analytics (TM1), which is uniquely positioned to support such integrated financial modeling. Among many advantages, the key benefits of using IBM Planning Analytics (TM1) are the following:

  • Real-time (in-memory) calculation of results that facilitates effective modeling, with changes reflected and aggregated for immediate review and analysis
  • Scalability – the model will scale to a significant volume of data and relationships and will support complex modeling needs without any performance impact
  • Ability for the end user to configure the assumptions and account inter-relationships to arrive at the desired model without relying on IT for development and system changes
  • Scenario planning – IBM PA provides an unlimited number of versions and scenarios that can be run in parallel to test various sets of assumptions and their respective impact on cash in the short or long term
  • Intuitive and friendly user interface with Excel and Web options for analysis, reporting, dashboarding and visualization

Learn more about ACG's integrated budgeting solutions by clicking the button below.

Integrated Budgeting Solutions

Topics: IBM Cognos TM1, IBM Planning Analytics

IBM PAX - The 4 Report Types and How to Use Them

Posted by Rob Harag on Thu, Dec, 19, 2019 @ 11:05 AM

IBM Planning Analytics for Excel (PAX) gives users four different report types to choose from depending on the type of report or analysis they would like to create. Some of the report types are new and represent a change from the former Perspectives or CAFE formats. This post is a high-level overview of the options and considerations - for a more detailed discussion, download our free 4-page guide that includes descriptions of the reports, pros and cons, and more.

The four basic views or report types in IBM Planning Analytics for Excel (PAX) are the following:

Exploration

This is the most free-form type of report and is best used for analytics and “Exploration”. It allows for easy pivoting and drill-down on rows and columns, and makes it really easy to analyze variances and drill down to understand drivers of performance. It is also very effective as the first step to set up a desired view that can then be converted to one of the other report types for further use.

Quick Report

This is a new view for IBM and is most similar to the traditional “Block Retrieve” that will be familiar to legacy Essbase users or others using similar tools. The report is defined as a block of values laid out in rows, columns and context, and the numbers are retrieved as a block. Quick Reports are not well suited for free-form analytics and the drill-down capability is fairly clunky; however, they are great for standardized reporting, allow multiple cubes / reports to be combined on a single page, and can be used to work offline with subsequent submission of values into the system.

Custom Report

This is the legacy “Perspectives” report from the old TM1 platform. It is the most flexible, allowing any combination of values to be included on the page. Each cell is a specific formula that draws from a set of defined values and dimensions. The report provides unlimited configuration and formatting flexibility and is probably the most versatile for analytics, standardized reporting, and custom view and report creation.

Dynamic Report

Finally – this is the old “Active Form” from TM1. It is similar to the Custom Report but allows for drill-down on rows and the ability to navigate the hierarchy, and it will flex with any changes in the underlying structures.

Given this many options, it can be difficult for users to decide which report type to use in a particular scenario. For example, if you are looking for standardized reporting, is the Quick or Custom Report better? What format should you use for input templates? When do you use a Dynamic Report vs a Custom Report?

The decision has to be made in the proper business context and will generally fall into three categories:

Standardized Reporting

A Custom Report will provide the most flexibility in terms of report structure, formatting, and adding external items such as calculations and visualizations. It is the format of choice for many users to build their reporting package. As a cell-based retrieve, it facilitates any combination of cells and values and allows for asymmetric reports that need to combine different values in rows and columns for functional reporting. The report is very stable and will maintain its integrity even after significant customization.

The second option for reporting is the Quick Report. As a block-based retrieve, it does not include any formulas and is thus the most performant of all. It facilitates custom formatting, allows insertion of rows and columns, and supports the inclusion of custom calculations and values. Multiple reports can be combined on a single page and the format supports virtual hierarchies. A Quick Report is convenient as it can be distributed directly to users without system access – a Custom Report needs to first be saved as “values” before distribution, which requires an extra step.

Budgeting and Forecasting

Just like for reporting, Custom and Quick Reports are the best choices, and a Quick Report is often the better fit. Input templates tend to be well defined and locked down, and therefore do not need a lot of flexibility or customization. A Quick Report can be set up and formatted as needed. It provides the ability to work offline and submit all changes upon re-connecting to the system. It supports block upload of information – users can copy and paste blocks of content from another book / area for upload. It will work with Excel formulas and support calculations.

Analytics and Explorations

The “Exploration” is the best choice for free-form analytics. It allows drag-and-drop of dimensions into rows, columns and page context, drill-down on rows and columns, leverages the subset editor with all static and dynamic subsets, supports virtual hierarchies, etc. This is the “Ultimate Pivot Table”, and for the pivoting enthusiasts out there it will open the door to unbounded analytics. There is no better mechanism for analyzing variances and drilling down to detail to understand the underlying drivers and causes.

For additional detail, download our free 4-page guide on the various use cases and how to select among them.

Download Guide

Topics: IBM Cognos TM1, IBM Planning Analytics

The Future of IBM Planning Analytics - IBM Data and AI Forum Debrief

Posted by Peter Edwards on Thu, Oct, 31, 2019 @ 08:58 AM

There was an exciting set of news and updates around IBM PA at the IBM Data and AI Conference in Miami last week. Through a variety of presentations by IBM Offering Management, as well as in private meetings with IBM executives, we felt a renewed buzz and commitment to the platform by senior business executives at IBM. It is clear that the system is front and center in the overall Data and AI strategy and that IBM continues to commit resources to further the platform's development.

We feel that IBM, as promised, did a great job of continuing to enhance the platform over the last 6-12 months. These tactical, step-by-step updates were focused on enhancing the core set of features and capabilities as well as improving the performance and stability of the platform. The roadmap includes a number of more strategic updates, and we feel that many of them are well along the way. While IBM did not provide any specific timelines or deadlines for individual deliverables, we would not be surprised if there is a lot of progress to discuss by the IBM Think conference in San Francisco in May 2020.

A high-level outline of the key areas of focus is the following:

The “TM1” Name is Back!

In a move that is widely applauded by the core User and Business Partner base, IBM is bringing the “TM1” name back. The name has great recognition across the user base globally. Over and over we continue getting the question “What is the difference between IBM PA and TM1?” This marketing adjustment will bring more clarity to the platform and reduce confusion. The new marketing tagline is “IBM Planning Analytics powered by IBM TM1”.

IBM Planning Analytics Workspace (PAW)

Some of the key recent enhancements to PAW include rich text formatting, action buttons, a wider variety of fonts, drill-up on visualizations, corporate themes, and other features. In addition to ongoing updates to core features, focus continues on making Workspace the primary modeling interface and closing the gap to Architect. At this stage, a lot of the capabilities are in place with only minor items remaining (cube locking, drill definition and others).

IBM Planning Analytics for Excel (PAX)

The Excel add-in has been significantly improved and stabilized over the last 3-6 months, and it seems that major gaps to Perspectives have been substantially closed. Key future focus areas include use of virtual hierarchies in all reports, adding relative proportional spread as an option, further enhancing the flexibility of Quick Reports, as well as continued tackling of small fixes and enhancement requests.

Integration with Cognos Analytics (BI)

Further integrating IBM PA and CA across all platforms and configurations (On-Prem, Cloud Dedicated, Cloud Standard, etc.) is a top priority. With native integration in place, the ability to cross-leverage the platforms across all deployment options is still a work in progress from both a technical and a licensing perspective. Efforts to provide on-demand integration, as well as access for individual users, will provide great pricing flexibility to customers with both platforms.

Cloud Capacity Increase

To accommodate larger customers and more complex models in the Cloud, IBM provides additional memory options beyond 512GB, with as much as a 2TB capacity limit as part of the standard offering. While we never really saw the current memory limit as a modeling limitation, with cloud adoption gaining further momentum this will make migration of large enterprise customers more viable.

Guided Navigation / Modeling

After a long time discussing various approaches to Workflow and collaboration, it seems that IBM is taking a simplified, more intuitive and faster approach to the problem. Based on some of the wireframes and views, this looks like a simple and intuitive way to provide a customized work environment with guided navigation. It will provide more of an “Application” look and feel, offer more configuration options to end users, and be more intuitive to deploy. That in turn will drive adoption and reduce time to market, especially for smaller players.

Licensing Options and Overall Value

IBM is introducing a number of new licensing options that will likely make the platform even more accessible to small companies and provide a cost-effective option to adopt and grow the application starting with a small user base. In addition to this, IBM introduced a number of competitive pricing offers and sales plays in 2019 that make the option very attractive.

Topics: IBM Cognos TM1, IBM Planning Analytics

IBM Planning Analytics Version 46 - Key Release Updates

Posted by Rob Harag on Thu, Oct, 10, 2019 @ 07:46 AM

IBM Planning Analytics Version 46 will be released on October 15, 2019. The release includes a number of exciting enhancements to IBM Planning Analytics Workspace as well as fixes and updates to IBM PAX (Excel) and the system overall. Key highlights are the following:

Number formatting in IBM PA Workspace

The ability to set formats directly from the front end is being added. This makes it much easier for users to pick the right format in their specific views and reports. A set of pre-defined formats as well as custom formatting options are available.

Drill-up on Visualization in IBM PA Workspace

A feature that allows drilling up on a visualization is finally available – until now, drill-up was only possible using the “Undo” button. This feature will provide more flexibility in navigating and analyzing individual tables.

Additional technical enhancements to IBM PA Workspace

  • Ability to unload a cube from memory - this allows for temporary reduction of RAM use during times of high usage or issue resolution
  • Setting of thresholds and alerts for individual databases in the Administration page - this simplifies configuration and facilitates unique threshold and alert settings for each database
  • Combining system resource thresholds and alerts in a single configuration page - this simplifies configuration in all environments, both cloud and local, plus provides greater control in monitoring multiple agents in PA Local

 

Click HERE to find more detail and information on the above items as well as all IBM PA Workspace releases by version.

In addition, to see the 15 fixes and defect corrections released in IBM PA Version 46, including IBM PAX and general system issues, please click HERE.

 

Leveraging IBM License Metric to Ensure Licensing Compliance for IBM Planning Analytics

Posted by Jim Wood on Mon, Jul, 15, 2019 @ 10:28 PM

Understanding the usage and utilization of IBM Planning Analytics Licenses can be a challenge for many customers. With the cost of non-compliance being potentially very high, it is important for companies to ensure that they understand and manage the usage of software licenses in their environment.

The IBM Planning Analytics licensing model is not always straightforward, especially in legacy environments. The original Applix model was based on the number of ports (users able to access the service). After the move to Cognos and subsequently IBM, the model transformed into a combination of PVU (Processor Value Unit) and end-user role-based licensing. Understanding the number of PVUs required and clearly identifying usage can be a major challenge.

This problem is magnified with the emergence of cloud computing and the increased use of virtualization. While on traditional physical servers the PVU count is set and harder to change, with virtual machines the resources allocated to a server can change with the execution of a few keystrokes.

While an audit can easily identify the current number of users and their role within a piece of software, tracking the resources used by a virtual server is an ongoing process. This is where the IBM License Metric Tool (ILMT) comes in.

Customers who have been through an audit and have virtual servers are normally asked to install ILMT within 90 days of the end of the audit. Customers are also required to send IBM reports on an agreed time frame.

So what is ILMT? It is a tool that scans targeted servers and identifies what IBM software is installed; for virtual servers, it also identifies the resources used and compares them to the specified license capacity. The results of the scans are displayed in reports, and both the scans and the reports can be scheduled. These are the reports sent to IBM.

What do I need to install ILMT? There are several things needed, but at a high level you need a separate install location, its own database, and an ILMT license, which can be obtained for free from IBM.

Can I get help with ILMT? ACG is able to help you through all of the steps, from obtaining a license all the way through to scheduling and distributing the reports to IBM.

Using REST API with IBM Planning Analytics

Posted by Brian Plaia on Tue, Jun, 25, 2019 @ 02:46 PM

What is the TM1 REST API?

The TM1 REST API is a relatively unexplored method of interacting with TM1 that allows external applications to access server objects directly. Introduced in version 10.2 and constantly being improved upon with each Planning Analytics release, the TM1 REST API conforms to the OData V4 standard. This means that the REST API returns standard JSON objects which can be easily manipulated in most programming languages. This is a change from previous proprietary TM1 APIs (such as the VBA/Excel and Java APIs) that required handling of custom TM1 objects.
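Because the responses are plain JSON, any HTTP client can consume them. A minimal sketch listing the cubes on a server is shown below; the host, port, and credentials are placeholders, and the example assumes TM1 native security (CAM-secured instances authenticate differently).

```python
# Minimal sketch: list cube names over the REST API. The /Cubes entity set is
# part of the OData model described in the TM1 REST API reference.
import requests

TM1_URL = "https://tm1server:8010/api/v1"   # port = HTTPPortNumber from tm1s.cfg

response = requests.get(
    f"{TM1_URL}/Cubes?$select=Name",
    auth=("admin", "apple"),   # placeholder credentials, TM1 native security mode
    verify=False,              # self-signed certificates are common on dev instances
)
response.raise_for_status()
for cube in response.json()["value"]:   # OData collections are returned under "value"
    print(cube["Name"])
```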

Currently, almost all administrative aspects of the TM1 server that are available in Architect, and even some that are not, such as retrieving thread status and canceling threads (functionality provided by TM1Top), can be handled using the REST API. Functions and actions are available to create cubes, dimensions, subsets, hierarchies, TI processes and rules, as well as to monitor transaction logging and even commit TI code to a Git repository.

How do I Enable the REST API?

First, ensure that your TM1 version supports the REST API. It was introduced in version 10.2; however, a number of key features are not available before 10.2.2 FP5, so that is the recommended minimum version.

To be able to access an instance via the REST API, the tm1s.cfg file parameter “HTTPPortNumber” must be set to a unique port.  The instance may need to be recycled prior to use as this is a static parameter.
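Once the port is set and the instance has been recycled, a quick way to confirm the API is reachable is to request the product version. This is a sketch with placeholder host, port, and credentials; the Configuration/ProductVersion path is taken from the REST API's OData model and should be checked against your version's documentation.

```python
# Quick connectivity check after setting HTTPPortNumber and recycling the instance.
import requests

resp = requests.get(
    "https://tm1server:8010/api/v1/Configuration/ProductVersion/$value",
    auth=("admin", "apple"),   # placeholder credentials, TM1 native security mode
    verify=False,
)
print(resp.status_code, resp.text)   # 200 plus a version string means the REST API is enabled
```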

What Can’t I Do with the REST API?

At its core, the TM1 REST API is not end-user ready.  It is merely a tool for developers to interact with TM1 from external applications/languages and more easily integrate TM1 into your data stream.

It is possible to create end-user consumables (such as dashboards and reports that reference and interact with data stored in TM1 cubes) using the REST API as the connection to TM1; however, these solutions have to be custom-built in another language (such as HTML, PHP, Python, Java, PowerShell, or a combination) or built on top of PAW (which in the back end uses the REST API to connect to TM1).

How is the REST API being used in the real world currently?

The REST API is the back-end connection for both IBM Planning Analytics Workspace (PAW) and IBM Planning Analytics for Excel (PAX).  Any reports/dashboards built in these tools will use the REST API seamlessly by default with no additional configuration/development required.

A majority of the current use cases of the REST API in production-level deployments at a number of clients is to remotely execute TI processes in a multi-threaded fashion. There is a benefit in using the REST API to kick off parallel TI processes over the old TM1RunTI.exe utility: the REST API avoids CAM logon lock contention, as well as the need to build a custom thread-tracking mechanism. When coupled with Python's "asyncio" library, the Python script is able to manage thread limits and overall execution status tracking without the need to build flag files into your TIs. A simplified sketch follows.
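The sketch below fires several TI executions concurrently with asyncio and aiohttp. The host, credentials, process name, and parameter are placeholders, and the tm1.ExecuteProcess action and its payload should be confirmed against the REST API reference for your version.

```python
# Sketch of kicking off TI processes in parallel over the REST API.
# Process name, parameter, host, and credentials are hypothetical placeholders.
import asyncio
import aiohttp

TM1_URL = "https://tm1server:8010/api/v1"
AUTH = aiohttp.BasicAuth("admin", "apple")   # TM1 native security mode

async def run_process(session, process_name, region):
    url = f"{TM1_URL}/Processes('{process_name}')/tm1.ExecuteProcess"
    body = {"Parameters": [{"Name": "pRegion", "Value": region}]}
    async with session.post(url, json=body, auth=AUTH, ssl=False) as resp:
        return region, resp.status

async def main():
    regions = ["NA", "EMEA", "APAC", "LATAM"]
    async with aiohttp.ClientSession() as session:
        # All four executions are awaited together, so they run concurrently.
        results = await asyncio.gather(
            *[run_process(session, "load.actuals.by.region", r) for r in regions]
        )
    for region, status in results:
        print(region, status)   # a 2xx status indicates the process was executed

asyncio.run(main())
```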

In addition to using the REST API to kick off TI processes, there are other clients using it for minor data syncs between instances. For instance, two different TM1 instances may hold copies of the exact same cube structure. Rather than using flat files and custom TI processes to push data between instances, a simple Python script to create and push a cellset from the source instance to the target instance accomplishes the task without the need for third-party software to execute the processes on both instances in sequence.
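A heavily simplified sketch of such a sync is shown below: read a cellset from the source with ExecuteMDX, then write each value into the identically structured cube on the target. Hosts, credentials, cube and dimension names are placeholders, and the tm1.Update payload shape is an assumption to verify against the REST API reference (a helper library such as TM1py wraps these calls if you prefer not to build them by hand).

```python
# Sketch of a small cube-to-cube sync between two TM1 instances via the REST API.
# All names are hypothetical; the cube is assumed to have only Account and Period
# dimensions so the example stays short.
import requests

SOURCE = "https://tm1-source:8010/api/v1"
TARGET = "https://tm1-target:8011/api/v1"
AUTH = ("admin", "apple")
MDX = "SELECT {[Period].[Jan]} ON COLUMNS, {TM1SubsetAll([Account])} ON ROWS FROM [GL]"

# Read the cellset (row member names plus cell values) from the source instance.
resp = requests.post(
    f"{SOURCE}/ExecuteMDX?$expand=Axes($expand=Tuples($expand=Members($select=Name))),"
    "Cells($select=Value)",
    json={"MDX": MDX}, auth=AUTH, verify=False)
resp.raise_for_status()
cellset = resp.json()
accounts = [t["Members"][0]["Name"] for t in cellset["Axes"][1]["Tuples"]]  # rows axis
values = [c["Value"] for c in cellset["Cells"]]

# Write each non-empty value to the same intersection on the target instance.
for account, value in zip(accounts, values):
    if value is None:
        continue
    body = {
        "Cells": [{"Tuple@odata.bind": [
            f"Dimensions('Account')/Hierarchies('Account')/Elements('{account}')",
            "Dimensions('Period')/Hierarchies('Period')/Elements('Jan')",
        ]}],
        "Value": value,
    }
    requests.post(f"{TARGET}/Cubes('GL')/tm1.Update",
                  json=body, auth=AUTH, verify=False)
```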

Topics: IBM Cognos TM1, IBM Planning Analytics, Rest API

Adding Single Sign-On to IBM Planning Analytics Local

Posted by Patrick Mauro on Sat, Jun, 01, 2019 @ 09:07 AM

Adding single sign-on capabilities is a great way to get the most out of IBM Planning Analytics Local, whether it’s part of an upgrade, or simply to enhance an existing install – in fact, it is often a business requirement for many clients. Unfortunately, however, the full procedure is not well documented by IBM – while bits and pieces are available, crucial information is often omitted. As a result, many companies struggle to implement the feature properly, if at all.

To help alleviate this problem, we have created a detailed guide on how to properly implement SSO based on our experience, which can be accessed here. This 13-page document includes detailed screenshots, links to any external docs or software needed, code block examples, etc.

What follows is a brief summary of this process – essentially, the configuration consists of three main steps:

  • Install and configure IIS
  • Install and configure Gateway
  • Edit Configurations and Finalize

Step 1: Install and Configure IIS

Internet Information Services (IIS) is Microsoft’s native web server software for Windows Server, which can be configured to serve a variety of different roles. In this context, it is required in order for the Planning Analytics server software to authenticate users based on Windows account information – so this must be set up first.

Available documentation often neglects to mention that this is a pre-requisite, and does not provide adequate information on what specific role services will be required – our documentation provides all details needed for setup. For additional information about IIS setup, Microsoft’s own IIS documentation may also be of use: https://docs.microsoft.com/en-us/iis/application-frameworks/scenario-build-an-aspnet-website-on-iis/configuring-step-1-install-iis-and-asp-net-modules

Step 2: Install and Configure Cognos Gateway

Cognos Gateway is an optional feature provided with Cognos Analytics (also known as BI), and is required in order to communicate with the IIS services and facilitate user log-in. However, as of this writing, a “full” install of CA will not include this Gateway by default, and it is not possible to retroactively add it cleanly to an existing CA install. As a result, this will require a careful, separate install of only the Gateway component, correctly configured to link to the existing CA instance.

Even after Gateway is installed, the work is not done: additional files must be added to it - a group of deprecated files included with the web server install in a ZIP file, BI_INTEROP.ZIP. Documentation for CA will mention this as a requirement for Active Directory - it is also required for Gateway, but needs additional modifications that are not detailed in existing docs. All needed information on these modifications is provided in our guide.

Step 3: Modify Configurations and Finalize

As part of the process above, we have essentially replaced the existing access point for CA with a new one. As such, the final step here is the easiest: going into all the existing configuration files and making sure all the references are accurate and include the new URIs, so that the system communicates properly between each of the components.

Once each of the configurations has been adjusted properly, we should be all set: there should no longer be any user login prompt for PAX, Architect, CA, TM1Web or PAW. That said, our guide also includes some additional features you might consider adding to enhance this even further.

You can access the full 13-page document at this link – but if you have any additional questions, feel free to contact ACG at info@acgi.com, or give our offices a call at (973) 898-0012. We’d be happy to help clarify anything we can!

Topics: IBM Cognos TM1, IBM Planning Analytics

Working with the Audit Log and Transaction Log in IBM Planning Analytics (IBM Cognos TM1)

Posted by Andrew Burke on Fri, Apr, 12, 2019 @ 02:59 PM

IBM Planning Analytics maintains a detailed log of all changes in the system caused by both users and processes, including changes in values or core structures. Maintaining full transparency is critical to facilitating key business and control requirements, including the following:

  • Understanding the drivers of changes from prior versions
  • Ability to roll back / restore previous values in case of errors or system issues
  • Provide an audit trail for all system changes for control and compliance purposes
  • Obtain better insight into status of deliverables beyond what is available via Workflow
  • Understand the system usage for license compliance and ROI measurement

Two Types of Logging

There are two types of logs in IBM Planning Analytics. Both can be enabled or turned off by the system administrator as required:

  • Transaction Log - captures all changes to data caused by end users, rules or processes
  • Audit Log - captures changes to metadata, security, logon information and other system activity detail

A Transaction Log is a simple list of all changes in values along with the associated context: the cube name, time and date, user name, the value pre- and post-change, and the element of every dimension of the particular cube at the time of the change. A typical Transaction Log looks something like this:

[Screenshot: sample Transaction Log entries]
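As a rough illustration of working with this data programmatically, the sketch below filters an exported transaction log for one cube and user. The file name and field positions are assumptions only; the actual column layout varies by version and configuration, so adjust the indices for your own files.

```python
# Rough illustration only: filter a transaction log export for a given cube and user.
import csv

LOG_FILE = "tm1s20190412.log"        # hypothetical exported transaction log
CUBE_COLUMN, USER_COLUMN = 6, 2      # assumed positions of the cube name and user name

with open(LOG_FILE, newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        if len(row) <= CUBE_COLUMN:
            continue                 # skip header or malformed lines
        if row[CUBE_COLUMN] == "GL" and row[USER_COLUMN] == "jsmith":
            print(row)               # full record: timestamp, old/new value, element names
```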

The Audit Log follows a similar concept; however, it includes more descriptive and contextual information.

Reporting of data changes

Once captured, all transactions in the log can be brought into the IBM Planning Analytics reporting view and filtered for a specific application (cube), time period, account group etc. to provide transparency of all changes made. Changes to structures and metadata (new accounts or other elements) are tracked in a similar view and can be included for reporting and analysis. The end view could look something like this:

[Screenshot: sample log report view]

Some of the other practical benefits of using the log information include the following:

Getting Insight into System Usage

The audit log provides an easy view of how the system is used - users can select a particular time period and see who logged in, when, and how often. This provides good visibility to ensure compliance and efficiency from a licensing perspective, but also to confirm that the company is getting ROI from the investment and that people are using the system to do their work.

Deliverable Status Check

As a complement to IBM Planning Analytics Workflow, the Audit Log will provide immediate visibility into the status of a particular deliverable. Say the objective is to complete a forecast by Friday at 5pm - as of Wednesday that week, the Workflow application shows all submitting Cost Centers as "In Progress" with no further information. The audit log will reveal how many users actually logged in and did work in a particular cost center / application, even though the work is still "In Progress" and not ready for submission. While anecdotal, it provides good insight into approximately where the organization is in the submission process and whether the deadline is likely to be met.

These are some examples of useful applications of this not widely known or understood feature. ACG has a set of tools and accelerators to help translate both the Transaction Log and the Audit Log into useful views in IBM Planning Analytics that can be viewed and analyzed. Any thoughts, feedback, examples of use, or other suggestions would be welcome.

 

Topics: IBM Cognos TM1, IBM Planning Analytics

Improving IBM Cognos Analytics Report Performance by using Multi-Dimensional Data Sources (IBM Planning Analytics or Cognos OLAP)

Posted by Steve Moore on Sat, Dec, 15, 2018 @ 11:54 AM

As many clients tell us, report performance can be a challenge in IBM Cognos Analytics environments. In most traditional IBM Cognos deployments, reports are developed using a relational data store as their primary source of data. The data is retrieved from the relational table structures using SQL scripting for reporting and analysis.

In many cases, the data stores are modeled using popular Datamart or Enterprise Data Warehouse (EDW) schemas like Star or Snowflake. This approach provides the flexibility of using SQL to produce data results, but it is less than efficient when reports display aggregates.

Deployments that are SQL-heavy need to manage the load of many simultaneous requests to the EDW during the day, as well as manage intra-day refreshes in order to provide near real-time results. This may lead to performance issues depending on the size and nature of the reports and the volume of data. Since operational reports are typically detailed in nature, they need to be developed in this fashion.

As an alternative, there is an opportunity to better leverage IBM Planning Analytics or other multi-dimensional technology to increase efficiency and improve performance of aggregate-based reports. In OLAP, the data is already aggregated and rolled up to the required summary points.

Typically, requests using MDX against an OLAP data source are significantly less resource-intensive than SQL requests against a large database with joined tables, for two main reasons:

  1. MDX requests do not need to aggregate detailed rows, since the data is already aggregated
  2. There is no concept of a ‘join’ in dimensional reporting

This approach has many significant benefits:

Improved Performance and Overall User Experience

Performance in generating aggregate-based, dimensionally authored reports is greatly superior to an equivalent SQL-based approach. Dimensionally authored reports against an OLAP source do not have to aggregate the result set. One ACG client reduced the run time of a report from 90 minutes to under 1 minute.

Reduced EDW Load

Reports that are either scheduled or on demand, if successfully converted to dimensional-based reports, take a significant load off the EDW in the number and complexity of report requests. In fact, calculating aggregates is considered one of the most resource-intensive types of SQL requests for the EDW. This will also improve the performance of the remaining relational, on-demand reports.

Reduced Disruption to Existing Process and Environment

Converting reports from SQL-based relational sources to dimensional sources is significantly less disruptive to users during intra-day refreshes. Many OLAP technologies provide non-disruptive update technology that minimizes or eliminates disruption for the user. For example, dimensional reports developed in Cognos and sourced from TM1 cubes are not only real-time but minimally disruptive, reducing the need for a separate ‘reporting’ cube. PowerPlay cubes have non-disruptive cube update technology, which is completely transparent to the user.

Better and more granular security

Typical multi-dimensional designs have security defined at a much more granular level. This provides greater protection of the data than a relational design provides.

In the client example above, there was a set of reports pointing to IBM Planning Analytics as a data source that were not written properly and took over 90 minutes to run. After a brief revision and re-write of the core logic (which took less than a few days for that particular report), we were able to reduce the run time to just 1 minute. The improved experience for the user, along with the reduced workload on IBM Planning Analytics, represented a huge benefit.

Topics: IBM Planning Analytics, IBM Cognos Analytics
