ACG Business Analytics Blog

Using REST API with IBM Planning Analytics

Posted by Brian Plaia on Tue, Jun, 25, 2019 @ 02:46 PM

What is the TM1 REST API

The TM1 REST API is a relatively unexplored method of interacting with TM1 that allows external applications to access server objects directly. Introduced in version 10.2 and improved with each Planning Analytics release, the TM1 REST API conforms to the OData V4 standard. This means the REST API returns standard JSON objects that can be easily manipulated in most programming languages. This is a change from the previous proprietary TM1 APIs (such as the VBA/Excel and Java APIs), which required handling custom TM1 objects.
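Because the API speaks plain OData/JSON, any HTTP-capable language can consume it. Here is a minimal Python sketch; the credentials are hypothetical, and the response shown is a trimmed hand-made example rather than verbatim server output:

```python
import base64
import json

def tm1_request_headers(user, password):
    """Basic-auth headers for a TM1 instance using native (mode 1) security."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    }

# A trimmed, hand-made example of the JSON an OData V4 service returns for a
# collection request such as GET https://tm1server:8001/api/v1/Cubes?$select=Name
sample_response = """
{
  "@odata.context": "$metadata#Cubes(Name)",
  "value": [{"Name": "Planning Cube A"}, {"Name": "}CubeSecurity"}]
}
"""

# Standard JSON handling is all that is needed - no proprietary TM1 objects.
cube_names = [c["Name"] for c in json.loads(sample_response)["value"]]
print(cube_names)
```

In a real script, the `sample_response` payload would come from an HTTP GET (e.g. via the `requests` library) using these headers.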

Currently, the REST API can handle almost every administrative aspect of the TM1 server that is available in Architect, and even some that are not, such as retrieving thread status and canceling threads (functionality previously provided by TM1Top). Functions and actions are available to create cubes, dimensions, subsets, hierarchies, TI processes and rules, as well as to monitor transaction logging and even commit TI code to a Git repository.

How do I Enable the REST API?

First, ensure that your TM1 version supports the REST API. It was introduced in version 10.2; however, a number of key features are not available before 10.2.2 FP5, so that is the recommended minimum version.

To access an instance via the REST API, the tm1s.cfg parameter “HTTPPortNumber” must be set to a unique port. Because this is a static parameter, the instance may need to be recycled before use.
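A minimal tm1s.cfg fragment showing the parameter in context (the instance name, data directory and port value here are hypothetical; choose a port not used by any other instance on the server):

```
[TM1S]
ServerName=Planning
DataBaseDirectory=D:\TM1\Planning\Data
# Enables the REST API on this port (must be unique per instance)
HTTPPortNumber=8001
```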

What Can’t I Do with the REST API?

At its core, the TM1 REST API is not end-user ready. It is a tool for developers to interact with TM1 from external applications and languages, and to integrate TM1 more easily into your data stream.

It is possible to create end-user consumables (such as dashboards and reports that reference or interact with data stored in TM1 cubes) using the REST API as the connection to TM1. Note, however, that these solutions must be custom-built in another language (such as HTML, PHP, Python, Java, PowerShell or a combination of these) or built on top of PAW (which itself uses the REST API to connect to TM1 in the back end).

How is the REST API being used in the real world currently?

The REST API is the back-end connection for both IBM Planning Analytics Workspace (PAW) and IBM Planning Analytics for Excel (PAX). Any reports/dashboards built in these tools use the REST API seamlessly by default, with no additional configuration or development required.

The most common Production-level use of the REST API at a number of clients is remotely executing TI processes in a multi-threaded fashion. Using the REST API to kick off parallel TI processes has benefits over the old TM1RunTI.exe utility: it avoids CAM logon lock contention and removes the need to build a custom thread-tracking mechanism. When coupled with Python’s asyncio library, a Python script can manage thread limits and overall execution-status tracking without building flag files into your TI processes.
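As a sketch of that pattern: the host, port and process names below are hypothetical, and the HTTP call itself is stubbed out so the concurrency control is the focus (the `Processes('<name>')/tm1.ExecuteWithReturn` URL follows the REST API's bound-action pattern for running a process):

```python
import asyncio

BASE_URL = "https://tm1server:8001/api/v1"  # hypothetical host and port

def execute_url(process_name):
    """URL for executing a named TI process via the REST API."""
    return f"{BASE_URL}/Processes('{process_name}')/tm1.ExecuteWithReturn"

async def run_process(sem, name, results):
    async with sem:                  # cap the number of concurrent TI threads
        # A real script would await an HTTP POST to execute_url(name) here
        # (e.g. with aiohttp); the call is stubbed so the sketch is runnable.
        await asyncio.sleep(0)
        results[name] = "CompletedSuccessfully"

async def run_all(process_names, max_threads=4):
    sem = asyncio.Semaphore(max_threads)
    results = {}
    await asyncio.gather(*(run_process(sem, n, results) for n in process_names))
    return results

statuses = asyncio.run(run_all([f"Load Region {i}" for i in range(1, 9)]))
print(len(statuses), "processes finished")
```

The semaphore plays the role of the thread limit, and the collected `results` dictionary replaces flag files as the execution-status tracker.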

In addition to kicking off TI processes, other clients use the REST API for minor data syncs between instances. Consider two TM1 instances that hold copies of the exact same cube structure. Rather than using flat files and custom TI processes to push data between them, a simple Python script that reads a cellset from the source instance and pushes it to the target accomplishes the task, without third-party software to execute processes on both instances in sequence.
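A hedged sketch of the middle step of such a sync: flattening the cellset JSON returned by the source instance (for example from an MDX query with cells and axes expanded) into element-tuple/value pairs ready to write to the target. The response fragment is a simplified, hand-made example, not verbatim server output:

```python
# Simplified, hand-made cellset fragment: axis 0 holds the columns,
# axis 1 holds the rows, and Cells are ordered row by row.
source_cellset = {
    "Axes": [
        {"Tuples": [{"Members": [{"Name": "Actual"}]},
                    {"Members": [{"Name": "Budget"}]}]},
        {"Tuples": [{"Members": [{"Name": "Jan"}]}]},
    ],
    "Cells": [{"Value": 100.0}, {"Value": 120.0}],
}

def flatten(cellset):
    """Map each (row elements + column elements) tuple to its cell value."""
    cols = [tuple(m["Name"] for m in t["Members"])
            for t in cellset["Axes"][0]["Tuples"]]
    rows = [tuple(m["Name"] for m in t["Members"])
            for t in cellset["Axes"][1]["Tuples"]]
    cells = iter(cellset["Cells"])
    return {row + col: next(cells)["Value"] for row in rows for col in cols}

print(flatten(source_cellset))
# {('Jan', 'Actual'): 100.0, ('Jan', 'Budget'): 120.0}
```

The resulting dictionary is a convenient shape for a write loop against the target instance's matching cube.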

Topics: IBM Cognos TM1, IBM Planning Analytics, Rest API

Adding Single Sign-On to IBM Planning Analytics Local

Posted by Patrick Mauro on Sat, Jun, 01, 2019 @ 09:07 AM

Adding single sign-on capabilities is a great way to get the most out of IBM Planning Analytics Local, whether it’s part of an upgrade or simply to enhance an existing install – in fact, it is often a business requirement for many clients. Unfortunately, the full procedure is not well documented by IBM – while bits and pieces are available, crucial information is often omitted. As a result, many companies struggle to implement the feature properly, if at all.

To help alleviate this problem, we have created a detailed guide on how to properly implement SSO based on our experience, which can be accessed here. This 13-page document includes detailed screenshots, links to any external docs or software needed, code block examples, etc.

What follows is a brief summary of this process – essentially, the configuration consists of three main steps:

  • Install and configure IIS
  • Install and configure Gateway
  • Edit Configurations and Finalize

Step 1: Install and Configure IIS

Internet Information Services (IIS) is Microsoft’s native web server software for Windows Server, which can be configured to serve a variety of different roles. In this context, it is required in order for the Planning Analytics server software to authenticate users based on Windows account information – so this must be set up first.

Available documentation often neglects to mention that this is a pre-requisite, and does not provide adequate information on which specific role services will be required – our documentation provides all details needed for setup. For additional information, Microsoft’s own IIS documentation may also be of use.

Step 2: Install and Configure Cognos Gateway

Cognos Gateway is an optional feature provided with Cognos Analytics (also known as BI), and is required in order to communicate with the IIS services and facilitate user log-in. However, as of this writing, a “full” install of CA will not include this Gateway by default, and it is not possible to retroactively add this cleanly to an existing CA install. As a result, this will require a very careful separate install of only the Gateway component, but correctly configured to link to the existing CA instance.

However, even after Gateway is installed, the work is not done. Additional files must also be added to Gateway: a group of deprecated files shipped with the web server install as a ZIP file, BI_INTEROP.ZIP. The CA documentation mentions this as a requirement for Active Directory; it is also required for Gateway, but needs additional modifications that are not detailed in existing docs. All needed information on these modifications is provided in our guide.

Step 3: Modify Configurations and Finalize

As part of the process above, we have essentially replaced the existing access point for CA with a new one. As such, the final step is the easiest: going into all the existing configuration files and making sure all references are accurate and include the new URIs, so the system communicates properly between each of the components.

Once each of the configurations has been adjusted properly, we should be all set: there should no longer be a user login prompt for PAX, Architect, CA, TM1Web or PAW. That said, our guide includes some additional features you might consider adding to enhance this even further.

You can access the full 13-page document at this link – but if you have any additional questions, feel free to contact ACG or give our offices a call at (973) 898-0012. We’d be happy to help clarify anything we can!

Topics: IBM Cognos TM1, IBM Planning Analytics

Working with the Audit Log and Transaction Log in IBM Planning Analytics (IBM Cognos TM1)

Posted by Andrew Burke on Fri, Apr, 12, 2019 @ 02:59 PM

IBM Planning Analytics maintains a detailed log of all changes in the system caused by both users and processes, including changes to values and core structures. Maintaining full transparency is critical to key business and control requirements, including the following:

  • Understand the drivers of changes from prior versions
  • Roll back / restore previous values in case of errors or system issues
  • Provide an audit trail of all system changes for control and compliance purposes
  • Obtain better insight into the status of deliverables beyond what is available via Workflow
  • Understand system usage for license compliance and ROI measurement

Two Types of Logging

There are two types of logs in IBM Planning Analytics. Both can be enabled or turned off by the system administrator as required based on existing requirements:

  • Transaction Log - captures all changes to data caused by end users, rules or processes
  • Audit Log - captures changes to metadata, security, logon information and other system activity detail

A Transaction Log represents a simple list of all changes in values, as well as the associated context such as cube name, time and date, user name, value pre- and post-change, and the element of every dimension of the particular cube at the time of the change. A typical Transaction Log looks something like this:

[Screenshot: sample Transaction Log]
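For illustration, a record like those above can be parsed with a few lines of Python. The sample line is hand-made and the exact column layout varies by TM1 version, so treat both as assumptions:

```python
import csv
import io

# Hand-made example of a comma-separated, quoted transaction log record:
# timestamp, user, data type, old value, new value, cube, then one element
# per dimension of the cube. Layout is illustrative, not a verbatim tm1s.log.
sample_line = ('"20190412143501","bplaia","N","15000","17500",'
               '"SalesCube","Actual","Jan","North","Revenue"')

def parse_record(line, dim_count):
    """Split one log record into labeled fields plus the element tuple."""
    fields = next(csv.reader(io.StringIO(line)))
    return {
        "timestamp": fields[0],
        "user": fields[1],
        "datatype": fields[2],          # "N" = numeric, "S" = string
        "old_value": fields[3],
        "new_value": fields[4],
        "cube": fields[5],
        "elements": fields[6:6 + dim_count],
    }

record = parse_record(sample_line, dim_count=4)
print(record["cube"], record["old_value"], "->", record["new_value"])
```

Once parsed this way, records are easy to filter by cube, user or time period for the reporting views described below.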

The Audit Log follows a similar concept; however, it includes more descriptive / contextual information.

Reporting of data changes

Once captured, all transactions in the log can be brought into the IBM Planning Analytics reporting view and filtered for a specific application (cube), time period, account group etc. to provide transparency of all changes made. Changes to structures and metadata (new accounts or other elements) are tracked in a similar view and can be included for reporting and analysis. The end view could look something like this:

[Screenshot: sample log report]

Some other practical benefits of using the log information include the following:

Getting Insight into System Usage

The audit log provides an easy view of how the system is used: users can select a particular time period and see which users logged in, when and how often. This provides good visibility to ensure compliance and efficiency from a licensing perspective, but also to confirm that the company is getting ROI from the investment and that people are using the system to do their work.

Deliverable Status Check

As a complement to IBM Planning Analytics Workflow, the Audit Log provides immediate visibility into the status of a particular deliverable. Say the objective is to complete a forecast by Friday at 5pm. As of Wednesday that week, the Workflow application shows all submitting Cost Centers as "In Progress" with no further information. The audit log will reveal how many users actually logged in and did work in a particular cost center / application, even though the work is still "In Progress" and not ready for submission. While anecdotal, it provides good insight into approximately where the organization is in the submission process and whether the deadline is likely to be met.

These are some examples of useful applications of this feature, which is not very widely known or understood. ACG has a set of tools and accelerators to help translate both the Transaction Log and the Audit Log into useful views in IBM Planning Analytics that can be viewed and analyzed. Any thoughts, feedback, examples of use or other suggestions are welcome.


Topics: IBM Cognos TM1, IBM Planning Analytics

Working with Virtual Dimensions in IBM Planning Analytics 2.0

Posted by Theo Chen on Mon, Apr, 23, 2018 @ 02:01 PM

The ability to create virtual dimensions from attribute-based hierarchies is a key update in IBM Planning Analytics 2.0 and a huge source of value. This is a major differentiator for IBM PA compared to other similar platforms. It gives the user even greater flexibility to analyze data using user-defined parameters on the fly. With virtual hierarchies, companies have much greater flexibility in designing solutions that provide greater efficiency yet retain the flexibility to include attributes for thorough analysis.

What are Virtual Dimensions

Virtual dimensions are dimensions that are created in an existing IBM Planning Analytics system on demand by selected end users (system administrators). These dimensions are not part of the core system structure / design, but rather are created ad hoc as needed, based on attributes of existing dimension elements. Attributes can be created as and when needed while the system is in use, and there are no practical limitations on how many attributes can be created for each member.

Once a virtual dimension is created, it can be used just like any other dimension. It can be selected for reporting, it can be brought into a cross-tab view for analysis or used as a target for input for (plan, forecast) data.

What Virtual Dimensions are NOT

Virtual dimensions in IBM PA are NOT the same as the ability to create alternate rollups / hierarchies or sorting / filtering by attributes. We find that the concept is a bit hard to understand initially and people default to the capabilities they know. Unlike filtering and alternate hierarchies, virtual dimensions go deeper and provide a more thorough way to analyze the data.


Consider an example: we have a sales reporting application with Customer and Business Unit as the two key dimensions. Other dimensions include Account, Time, etc. The Customer dimension is organized by Industry, so Customers roll up to Sectors, which roll up to Industries, which roll up to Total Customer.

Let’s say I want to understand my sales by size of company (Enterprise vs Mid-Market vs Small and Medium Enterprise, or SME) in addition to the industry rollup. I do not have “Company Size” defined as a dimension so under previous rules I would have the following options:

  • Redesign the cube to include “Size” as a dimension – depending on the size of the overall model that could be a significant undertaking and could take a lot of time
  • Build an alternate rollup of Customer by size – in this case I would have to choose the Industry or Size rollup for reporting and analysis but I could not use both and would still not get the desired cross-view
  • Embed “Size” inside the existing “Industry” rollup – this would be extremely inefficient, as I would have to include multiple rollups for Size within each Industry / Segment, repeat them for each Industry / Segment, and deal with conflicting element values – undesirable from a maintenance perspective, and I would still not get the same flexibility for analysis

Enter “Virtual Hierarchies”: with IBM Planning Analytics 2.0, I can simply create a new Attribute and label each Customer as an Enterprise, Mid-Market firm or SME. Once that is done, I turn this Attribute into a Virtual Hierarchy and it shows up in my “Set” menu. From there, I can simply drag the “Size” hierarchy into the cross-tab view and see the breakdown of customers by size and industry in a simple cross-tab. I can drill down or pivot to analyze the data, and even input into the virtual intersections to post adjustments or forecast sales. And of course, element security still works at the leaf level. Extremely powerful…
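To make the cross-view concrete, here is a toy Python sketch of the rollup the “Size” hierarchy enables; the customer names, attributes and sales figures are all invented:

```python
# Invented per-customer sales and attributes (the "Size" attribute is the
# one we would turn into a virtual hierarchy in Planning Analytics).
sales = {"Acme": 500, "Globex": 300, "Initech": 200, "Umbrella": 400}
attrs = {
    "Acme":     {"Industry": "Manufacturing", "Size": "Enterprise"},
    "Globex":   {"Industry": "Manufacturing", "Size": "Mid-Market"},
    "Initech":  {"Industry": "Technology",    "Size": "SME"},
    "Umbrella": {"Industry": "Technology",    "Size": "Enterprise"},
}

# Roll sales up by Industry AND Size at once - the cross-view that neither
# an alternate rollup nor attribute filtering alone would give us.
rollup = {}
for customer, value in sales.items():
    key = (attrs[customer]["Industry"], attrs[customer]["Size"])
    rollup[key] = rollup.get(key, 0) + value

print(rollup[("Technology", "Enterprise")])   # 400
```

In Planning Analytics the cube engine performs this aggregation for you once the attribute is promoted to a hierarchy; the sketch only illustrates the shape of the result.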

Key Benefits

The key benefits of this capability include the following:

  • Savings in (RAM) memory, since fewer core dimensions are required, reducing the size of the core model
  • Better overall performance and usability of the model, with less clutter / complexity
  • Much greater flexibility to adjust existing models and gain deeper insight
  • Ability to conform to model standards, with any customization done locally

What to Look Out For

Some points to be aware of when working with virtual hierarchies: they can currently only be built with two levels, and TurboIntegrator scripting is required for deeper hierarchies with more levels. Also, elements in a virtual hierarchy are not elements of the main dimension itself; each element in the virtual hierarchy has to be unique from every element in the dimension. There is no security (yet) for 'virtual' elements, although that should be addressed soon.

The impact of Virtual Hierarchies will be different for existing vs net new applications. For existing models, it will add flexibility through the ability to expand structures without going through a potentially substantial application redesign. For new solutions, it creates an opportunity to design models with more simplicity and rely on Virtual Dimensions to provide scalability and flexibility in the future.

Give us a call to discuss how this great new capability can add value to your platform and review options to provide more insight and analytical power.

Topics: TM1 Technology, IBM Cognos TM1, Performance Management, Financial Planning and Analysis, IBM Planning Analytics

Object Security Integration with IBM Cognos TM1

Posted by James Doren on Sat, Jun, 27, 2015 @ 10:40 AM

One of the key security-related selling points within IBM Cognos TM1 is the tool’s ability to assign different security levels to different objects within the TM1 model for any given user or group. Properly handling object security interaction within a TM1 model can provide an unparalleled level of control over the system, user base, and data. As we have seen at many clients, it will improve the reliability of your data by ensuring that only the users qualified to change data at a given intersection point will be doing so, and allows for unique configurations by personalizing the security rights for each individual group. This added flexibility helps users get the most out of their TM1 model, so understanding exactly how it works is very important.

The security rights can be set up to gain the most control possible over what users can and cannot see, add, delete or change. There are four different levels of object security that pertain to a user’s ability to manipulate data:

  • Cube Security
  • Dimension Security
  • Element Security
  • Cell Level Security

For the following examples, we will assume a single user assigned to a single group. This way, when we talk about the ‘User’, we understand that this user is part of a TM1 security group. Let’s look at these different levels of object security in their simplest form. Say, for example, a user is assigned READ access to a cube called ‘Planning Cube A’. If this is the sole security assignment, the user will have READ access on all intersections of the cube. If a user is assigned READ access to a dimension called ‘Time’, the user will have READ access to all intersections involving the ‘Time’ dimension. Now let’s say that the only security assigned to a user is element-level security on the ‘Time’ dimension: the user is assigned READ access for the element ‘2015’, which gives the user READ access ONLY to the element ‘2015’. The same concept applies to cell-level security.

Simple enough, right? Well, the tricky part lies within the way the security settings for these four different objects interact with each other when multiple levels of object security are assigned. Let us look at the example below:

You have a cube called ‘Planning Cube A’ with the following dimensions:

  1. Version
  2. Territory
  3. Account
  4. Year
  5. Measure

You assign a user READ access to the ‘Planning Cube A’ cube, but you also assign the same user WRITE access to all of the elements in this cube.

What do you think would happen in this example? In this case, the READ access of the cube will override the WRITE access of the elements, allowing the user to view the data within the cube but not allowing them to update any of the data.

Let us look at another example, assuming we are talking about the same cube with the same dimensions as the last example:

You assign a user WRITE access to the ‘Planning Cube A’ cube, WRITE access to all of the elements within all of the dimensions EXCEPT for the ‘Version’ dimension where you assign the user READ access to all of the elements within the ‘Version’ dimension.

In this case, the elements in the ‘Version’ dimension identify every intersection in the cube, and since the user is assigned READ access to all elements in this dimension, the user is unable to update data within the cube.

As stated by IBM, “When groups have security rights for a cube, those rights apply to all dimensions in the cube, unless you further restrict access for specific dimensions or elements”.
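The precedence rule in the two examples above can be thought of as "most restrictive wins". Here is a simplified Python model of that idea (real TM1 has additional rights such as LOCK, RESERVE and ADMIN, plus defaulting behavior this sketch ignores):

```python
# Rights ordered from most to least restrictive.
RANK = {"NONE": 0, "READ": 1, "WRITE": 2}

def effective_access(*assigned):
    """Most restrictive of the rights assigned at cube / dimension /
    element / cell level - a simplified model of how they combine."""
    return min(assigned, key=RANK.__getitem__)

# Example 1: READ on the cube overrides WRITE on every element.
print(effective_access("READ", "WRITE"))            # READ

# Example 2: WRITE on the cube and most elements, but READ on all
# 'Version' elements still blocks updates everywhere.
print(effective_access("WRITE", "WRITE", "READ"))   # READ
```

The model is deliberately crude, but it captures why READ at the cube level caps everything beneath it.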

Why this type of interaction is useful

This type of object security interaction is essential for having full control over your users and groups. By setting security access at different object levels, you can pinpoint and assign the security settings for a given user exactly how you want them to be. Imagine this scenario, referring to the same cube/dimensions as the prior examples:

You wish to have all groups be able to view and read cube data for all territories specified in the ‘Territory’ dimension. You also want each group to be able to update cube data ONLY for their own territory and ONLY for ‘2015’.

By properly manipulating the different levels of object security, and utilizing the way they interact with each other, this can be accomplished quite easily with no coding and no additions to the security model.

To achieve this, we would take the following steps:

  1. Assign each Territory group WRITE access to the ‘Planning Cube A’ cube
  2. Assign each Territory group READ access to all territories (element level security) within the ‘Territory’ dimension that do not apply to that group
    • For example, assign the ‘North’ territory group READ access to all elements except for ‘North’ in the Territory dimension
  3. Assign each group READ access to all elements except for the ‘2015’ element in the year dimension (element level security)

With this security scheme set up, users will be able to view all cube data, but will only be able to update cube data related to the territories applicable to them.
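Under the same most-restrictive-wins simplification, the three steps above can be sanity-checked in a few lines of Python (group and element names mirror the example; this models only the READ/WRITE interaction, not full TM1 security semantics):

```python
RANK = {"NONE": 0, "READ": 1, "WRITE": 2}

def element_access(group_territory, territory, year):
    """Effective right for a territory group at a (territory, year) slice."""
    cube = "WRITE"                                               # step 1
    terr = "WRITE" if territory == group_territory else "READ"   # step 2
    yr = "WRITE" if year == "2015" else "READ"                   # step 3
    return min((cube, terr, yr), key=RANK.__getitem__)

print(element_access("North", "North", "2015"))  # WRITE: own territory, 2015
print(element_access("North", "South", "2015"))  # READ: another territory
print(element_access("North", "North", "2014"))  # READ: outside 2015
```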

While these are just a few examples of how the interaction between object security works, there are many great applications that can come out of fully understanding how the relationships work. In summary, if a cube has READ access specified for a group, then even if the elements or cells are specified as WRITE access, users in the group will not be able to update any cube data. However, if a cube is given WRITE access for a specified group, all of the dimensions by default take the access of the cube security assignments. It is in this way, a sort of top-down approach, that specific intersections and elements can be restricted for a particular group.

Topics: TM1 Technology, IBM Cognos, IBM Cognos TM1

The Future of IBM Cognos TM1

Posted by Jim Wood on Wed, May, 13, 2015 @ 09:46 AM

IBM Cognos TM1 version 10 has now been around for a good while and has already gone through several minor and major updates. The move from version 10.1 to 10.2 itself was a significant change, even though it was within the same release stream. The key updates included the move from .Net to Java as the main platform and support for multi-threading queries.

Despite all these changes and updates, the underlying framework and structures have remained the same. Aside from the addition of CAFE, which delivered significant performance and flexibility improvements for users over a wide-area network, most interfaces have not changed in a number of years.

So the question has to be: What’s next for TM1?

IBM has been aware of some customer frustrations with the TM1 interfaces for quite some time. Performance Modeler and Application Web were a step in the right direction, but they were aimed more at the Cognos Enterprise Planning market for ease of transition. Interfaces such as Architect, which are still heavily relied upon, haven’t seen a major update since the start of version 9.

It seems that IBM plans to address these issues in the next release through a tool called “Prism”, currently in its alpha phase. It is not entirely clear what the new interface will look like, as IBM is playing its cards close to its chest; however, rumor has it that it will be a significant upgrade and will close the gap with some of the other analytics and visualization platforms. What has been confirmed thus far is that Prism will replace Architect and Perspectives.

IBM provided an overview of Prism at their IBM Vision 2015 conference in Orlando in May 2015.

Is Prism going to be a massive leap forward for TM1?

Hopefully yes. Even though only a small amount of information is available, it is clear that IBM is making a heavy push to further enhance the system and is determined to strengthen its position in the market by committing resources. This enhancement to the front end of the tool would be a welcome and refreshing improvement that would go a long way in streamlining overall usability. Time will tell whether Prism lives up to its promise, but the development seems to be moving in the right direction.

What do I need to do to make sure I’m aware of what Prism will bring?

With the demonstration at IBM Vision 2015 in May, chances are that a lot more information will become available in the near term as to the scope and look / feel of the system. You can always contact your IBM account representative / business partner and ask to be included on communications as new information becomes available. For those who want to stay really close to the updates, you can sign up as a Beta tester and gain a better understanding of what is coming and if / how it will impact your application and vision for the system.

Topics: TM1 Technology, IBM Cognos, IBM Cognos TM1, Performance Management

Should I be Developing TM1 Applications Using Performance Modeler?

Posted by Jim Wood on Wed, Jun, 04, 2014 @ 12:44 PM

Before the advent of IBM Cognos TM1 10, any company or individual looking to complete development within TM1 used the tried-and-tested but dated and unhelpful development tool sets. When IBM introduced TM1 10 back in 2011, it introduced a whole new, web-based development tool set called Performance Modeler. Partnered with the rebranding of what was previously known as Contributor (now known as Application Server), this wasn’t just IBM bringing new options into play; it was the introduction of a whole new development and deployment methodology, based around using the Application Web portal as the front end for gathering and reporting on planning information, with Performance Modeler used to get you there.

With any new methodology there is always a level of resistance, and this was the case within the TM1 community. While the old tool set was not the best, it was very well known. Recently, however, with IBM pursuing this methodology, we have seen more and more people, especially those new to TM1, following IBM’s new pathway. So with more people starting to use Performance Modeler, and with IBM continuing to develop it further, all TM1 users and developers need to start looking at it as a serious project implementation option.

So the question has to be: should you be looking to use Performance Modeler?

Performance Modeler is easy to use.

Performance Modeler has plenty of new wizards built in to make previously difficult tasks a lot easier. There is a built-in wizard (called guided import) for building cubes and dimensions, and even for importing data. Adding calculations has been made much easier, and on top of that IBM has added many new functions to Performance Modeler; a perfect example is the Lag function, which makes time-based balances easier to build and maintain.

Performance Modeler makes building Process Workflow easy.

Through its partnership with Application Server, Performance Modeler makes building the parts required for a Workflow implementation easier. Within Performance Modeler you can create the hierarchy required for managing your workflow, and it also handles dimensions differently, so that any cube built is optimized to work well within the workflow framework.

Performance Modeler offers you a range of new options.

Performance Modeler has added development options that were not possible using the old tool sets. For example, removing a dimension from a cube is now a simple drag-and-drop task; previously, this would have involved rebuilding the cube.

Performance Modeler is the way forward.

Performance Modeler is a key part of the IBM Cognos TM1 development road map. IBM sees this new development methodology as a key part of making TM1 integral to its FP&A strategy going forward. As it progresses, more features and improvements will be added.

Now that we’re past the early development stages, it’s an ideal time to take a look at Performance Modeler and see what benefits it can bring to you.

Topics: IBM Cognos TM1, Business Forecasting, Performance Modeler

Integrating Resource Management with Cognos Command Center

Posted by Peter Edwards on Tue, Jan, 28, 2014 @ 04:46 PM

For companies with large data stores, the benefits of performing analytics are impossible to deny: an IBM study revealed that enterprises which conduct analytics see up to 1.6x the revenue growth, 2x the EBITDA growth and 2.5x the stock price appreciation of those that don’t. [1] However, as data volumes continue to escalate, the cost of providing analytics climbs as well. 90% of companies with 1,000 or more employees access several distinct data sources to complete their accounting close, and 43% use four or more. In the finance industry, professionals spend only 22% of their time performing actual analysis – the rest is spent waiting for data or verifying its authenticity. [2]

Cognos Command Center

Because large datasets lead to longer downtimes for maintenance, most IT departments use automation to manage systems. These custom-built systems tend to be fragile, require updates whenever source systems are updated or added, and rely on the knowledge of a few key team members. Issues that do arise require substantial involvement of IT personnel, and if critical members of the team are absent, they can cause massive delays.

IBM Cognos Command Center is a new product offered by IBM that facilitates database management by consolidating all of your resources into one dashboard. Cognos Command Center is designed to be usable by business professionals without extensive technical knowledge, and provides enterprises with a robust alternative to custom data warehouse management.

Why should you use IBM Cognos Command Center? Here are a few reasons:

Command Center is Intuitive

Cognos Command Center comes packed with plugins so you can snap-in functions that would normally require custom coding and maintenance from IT. Perform file operations, secure data transfers, and native database functions without writing one line of code. Save money on IT resources, and better utilize your existing employees by allowing them to spend less time on routine tasks. Command Center takes the guesswork out of process failures by alerting you immediately by email, and translating commonly encountered errors into plain English. IT can even add procedures to deal with recoverable errors automatically and silently.

Command Center is Secure

Manage your servers from anywhere in the world via the web or your mobile device. Many complex systems involve the use of stored login credentials, but most companies do a poor job securing this data, representing a major security risk from both internal and external sources and accounting for 13% of all breaches according to Verizon’s 2013 Data Breach Investigations Report. [3] Cognos Command Center encrypts sensitive data, and users can run processes that utilize stored login credentials without ever being able to read them. In the event of failure or intrusion, any change, no matter how small, can be traced to a single user.

Command Center is Flexible

IBM Cognos Command Center uses global variables to aid in synchronization of data between servers. Manage your workflow centrally, and update parameters on every server you own with a single click. Never lose hours of uptime to rolling back data because of a forgotten parameter again! Command Center integrates workflows for heterogeneous environments into one management system, and its plugin-based system means functionality can quickly be added for new storage systems as they are introduced.

Command Center is Supported

IBM Cognos Command Center is the newest member of the IBM Cognos Business Intelligence family. IBM was ranked a clear “leader” in Gartner’s 2013 Business Intelligence & Analytics Magic Quadrant. The Cognos platform’s ability to serve as a complete BI solution has been bolstered by IBM’s recent high profile acquisitions. IBM Cognos received high marks from clients regarding road map and future vision, product quality, and ability to integrate with information infrastructure. [4] The addition of Command Center to the Cognos family represents IBM’s ongoing commitment to deliver the most full-featured business intelligence platform available.

Topics: IBM Cognos TM1, IBM Cognos Command Center, Star Analytics

How Do TM1 Financial Systems Ensure Clean Data?

Posted by Peter Edwards on Wed, Jan, 08, 2014 @ 03:44 PM


Almost half of new business initiatives fail because of poor data, according to a recent article on the EPM Channel website. This trend fuels mistakes in everything from bill payments to shipping, and often means that your best employees spend more time organizing data than analyzing it. While manual processes can temporarily patch these issues, IBM TM1 uses a series of checks and balances to ensure that businesses can build their databases on clean data, saving employees time and avoiding costly mistakes.

Dirty data is a long-standing pain for many businesses. With Big Data, organizations hold ever larger volumes of data, so its quality inevitably comes into question. Organizations often look for ways to improve their data because they have too many manual processes in place or their data is being cleaned in a non-automated fashion. If your data is clean and organized, you’ll streamline processes and align the organization on a single set of numbers. As the EPM Channel article points out, it’s best to do this at the start, saving employee resources and avoiding costly problems.

There are systems that can help. TM1 financial solutions enforce what’s called referential integrity in data. Many times, when pulling data to put into TM1, there are problems in supporting systems that must be fixed. For instance, if you’re pulling data from a financial system that’s doing consolidations, there could be numbers at an account level that do not necessarily total to the parent account that they should roll up into. That’s because the system allows users to store two different numbers. This means there are a number of child accounts that fail to add up to the appropriate total because the database doesn’t enforce referential integrity.

Most relational databases or ERP applications don’t necessarily enforce these rules. They try to put business rules in place, but fundamentally the technology allows users to enter data in different places, creating conflicting information. When data is pulled into the TM1 financial systems software, the detail-level account data rolls up into the parent account, creating that figure. This method ensures that the parent figure isn’t entered separately and avoids any situation where the child accounts don’t add up to the parent number. Therefore, users who leverage TM1 as their database are much more likely to have quality, clean data.
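The rollup principle can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not TM1 code; the account names and figures are invented for the example. The key point is that the parent figure is always computed from its children and never stored or entered independently, so the two can never disagree.

```python
# Illustrative sketch: a parent account value is derived from its children,
# never entered separately, so referential integrity holds by construction.

def consolidate(children):
    """Return the parent total computed from its child accounts."""
    return sum(children.values())

# Hypothetical child accounts feeding a parent "Travel Expenses" account.
travel_expenses = {
    "Airfare": 12000,
    "Hotels": 8500,
    "Meals": 3200,
}

# There is no separately stored parent number to fall out of sync.
parent_total = consolidate(travel_expenses)
print(parent_total)  # 23700, always the exact sum of the children
```

Contrast this with a system that stores the parent figure as its own editable field: once a user overwrites it, the children no longer reconcile to the parent, which is exactly the failure mode described above.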

To learn more about TM1 financial solutions or see a TM1 demo contact ACG.

Contact ACG

Topics: TM1 Technology, IBM Cognos TM1, Business Forecasting, Clean Data, Performance Management

Seven Ways IBM Cognos TM1 Aids Forecasting

Posted by Peter Edwards on Fri, Dec, 06, 2013 @ 04:53 PM



Whether companies realize it or not, they likely suffer from at least one symptom of forecasting illness. Misconceptions and antiquated forecasting processes lead to decreased accuracy, quality and profitability for businesses. To successfully treat these forecasting illnesses, finance managers must identify common symptoms and implement solutions that lead to a healthier organization.

In a white paper titled “Seven Symptoms of Forecasting Illness” the IBM Beyond Budgeting Round Table illustrates how the IBM Cognos TM1 business intelligence solution can help identify and cure these pervasive ailments.

1) Semantic confusion: A company might show signs of semantic confusion if the organization finds it difficult to deal with unexpected or undesirable forecasts. This symptom can manifest as a blurred line between the forecast and the company’s goal.

Cure: To address this symptom, companies can use the IBM Cognos TM1 business intelligence solution to allow managers to create, maintain, and reference multiple forecast scenarios easily and efficiently.

2) Visual impairment: A company may suffer from visual impairment if it is obsessed with the year-end forecast numbers, or if it is surprised by new developments at the beginning of the fiscal year.

Cure: Companies should focus on building a comprehensive company forecast strategy. BP is a great example of the power of combined forecasting: the company brought all of its stakeholders together in a large auditorium and efficiently completed a single, comprehensive company forecast.

In some companies, salespeople are either pushing their numbers to count toward next year’s sales goals or holding their numbers to this year to benefit themselves under their compensation plans. You really need to separate compensation from the forecasting process to take this manipulation out of the process and get truer information.

3) Systemic overload: Systemic overload is characterized by an increasing need to add greater detail and additional analysis in the misguided view that more data is always better in the forecasting process.

Cure: Don’t get stuck in the fine details. With IBM’s TM1 financial planning solution, companies can quickly build scenarios that reflect potential realities while achieving a balance between initial data input and subsequent analysis.

For best practices, you should only be planning the primary financial line items in your business. You should identify 20-50 of these line items that are most important in your business. If you go into too much detail in your budget, you are going to spend all your time configuring the budget and not enough time doing the analysis of it afterward.

4) Prosperity syndrome: If a company’s forecast reflects optimistic growth regardless of negative industry trends or current economic conditions, it may be suffering from prosperity syndrome.

Cure: To alleviate this symptom, companies should consider current market conditions, examine expected competitor strategies and rely on accurate data provided by TM1 financial systems.

5) Lack of coordination: If an organization has multiple sources, or silos, of data, it can’t create reliable, consistent forecasts.

Cure: In order to have a good forecasting process, you also need good data organization in the company. This can be achieved by adopting a single forecasting system throughout the entire organization and feeding data into the forecasting system from other enterprise systems like the General Ledger or the Human Resource Management systems.

6) Asocial behavior: If an organization routinely manipulates or alters its forecasts, even if doing so is not in the best interest of the company, it may suffer from asocial behavior.

Cure: Companies should reward managers and employees based on the value they provide, rather than rewarding “sandbaggers” for abusing the system. This can be achieved by eliminating links between incentive compensation and forecasts.

7) Lack of executive sponsorship: If individual departments are attempting to improve the forecasting process, but others are split over the importance of the task, an organization may suffer from a lack of executive sponsorship.

Cure: For a company to succeed in its financial planning and reporting efforts, direction has to come from the top down. These processes touch multiple parts of the organization and must be designated a key strategic initiative of the entire company in order to succeed.

All companies suffer from at least one type of forecasting illness. These symptoms are a reminder that every business needs to constantly examine its core processes and culture to ensure it’s using the most accurate forecasting tools and practices. By implementing a data analysis tool like the IBM Cognos TM1 business intelligence solution, organizations can achieve a high level of efficiency and promote healthier forecasting.

Free Whitepaper: 7 Symptoms of Forecasting Illness

Topics: Data Analysis Tool, Business Intelligence, IBM Cognos TM1

Contact ACG

Tel: (973) 898-0012
