ACG Business Analytics Blog

IBM Planning Analytics Version 46 - Key Release Updates

Posted by Rob Harag on Thu, Oct, 10, 2019 @ 07:46 AM

IBM Planning Analytics Version 46 will be released on October 15, 2019. The release includes a number of exciting enhancements to IBM Planning Analytics Workspace as well as fixes and updates to IBM PAX (Excel) and the system overall. Key highlights are the following:

Numbers formatting in IBM PA Workspace

The ability to set formats directly from the front end is being added, making it much easier for users to pick the right format in their specific views and reports. A set of pre-defined formats as well as custom formatting options are available.

Drill-up on Visualization in IBM PA Workspace

A feature that allows drilling up on a visualization is finally available. Until now, drill-up was only possible using the “Undo” button; this feature provides more flexibility in navigating and analyzing individual visualizations.

Additional technical enhancements to IBM PA Workspace

  • Ability to unload a cube from memory - this allows for temporary reduction of RAM use during times of high usage or issue resolution
  • Setting of thresholds and alerts for individual databases in the Administration page - this simplifies configuration and facilitates unique threshold and alert settings for each database
  • Combining system resource thresholds and alerts in a single configuration page - this simplifies configuration in all environments, both cloud and local, plus provides greater control in monitoring multiple agents in PA Local

 

Click HERE to find more detail and information on the above items as well as all IBM PA Workspace releases by version.

In addition to the above, IBM PA Version 46 includes 15 fixes and defect corrections covering IBM PAX and general system issues. For the full list, please click HERE.

 

Leveraging IBM License Metric to Ensure Licensing Compliance for IBM Planning Analytics

Posted by Jim Wood on Mon, Jul, 15, 2019 @ 10:28 PM

Understanding the usage of IBM Planning Analytics licenses can be a challenge for many customers. With the cost of non-compliance potentially very high, it is important for companies to understand and manage the usage of software licenses in their environment.

The IBM Planning Analytics licensing model is not always straightforward, especially in legacy environments. The original Applix model was based on the number of ports (users able to access the service). After the move to Cognos and subsequently IBM, the model transformed into a combination of PVU-based (Processor Value Unit) and end-user role-based licensing. Understanding the number of PVUs required and clearly identifying usage can be a major challenge.

This problem is magnified with the emergence of cloud computing and increased use of virtualization. While on traditional physical servers the PVU count is set and harder to change, with virtual machines the resources allocated to a server can change with the execution of a few key strokes.
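
As an illustration of why this matters, consider the difference between full-capacity and sub-capacity PVU counting; the per-core rating below is an assumption, as actual ratings come from IBM's published PVU table:

    # Illustrative PVU arithmetic only; real per-core ratings depend on the
    # processor model and come from IBM's official PVU table.
    PVU_PER_CORE = 70        # assumed rating for a common x86 processor

    # Full capacity: every core on the physical host counts.
    physical_cores = 16
    print(physical_cores * PVU_PER_CORE)   # 1120 PVUs

    # Sub-capacity: only the vCPUs allocated to the VM count -- and because
    # that allocation can change at any time, ILMT must track it continuously.
    vm_vcpus = 4
    print(vm_vcpus * PVU_PER_CORE)         # 280 PVUs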

While an audit can easily identify the current number of users and their roles within a piece of software, tracking the resources used by a virtual server is an ongoing process. This is where the IBM License Metric Tool (ILMT) comes in.

Customers who have been through an audit and have virtual servers are normally asked to install ILMT within 90 days of the end of the audit, and to send IBM reports at an agreed frequency.

So what is ILMT? It is a tool that scans targeted servers and identifies what IBM software is installed; for virtual servers, it also identifies the resources used and compares them to the specified license capacity. The results are displayed in reports, and both the scans and the reports can be scheduled. These are the reports sent to IBM.

What do I need to install ILMT? There are several things needed, but at a high level you need a separate install location, a dedicated database, and an ILMT license, which can be obtained for free from IBM.

Can I get help with ILMT? ACG can help you through all of the steps, from obtaining a license all the way through to scheduling and distributing the reports to IBM.

Using REST API with IBM Planning Analytics

Posted by Brian Plaia on Tue, Jun, 25, 2019 @ 02:46 PM

What is the TM1 REST API

The TM1 REST API is a relatively unexplored method of interacting with TM1 that allows external applications to access server objects directly. Introduced in version 10.2 and improved upon with each Planning Analytics release, the TM1 REST API conforms to the OData V4 standard. This means the REST API returns standard JSON objects which can be easily manipulated in most programming languages. This is a change from previous proprietary TM1 APIs (such as the VBA/Excel and Java APIs), which required handling custom TM1 objects.

Currently, almost all administrative aspects of the TM1 server available in Architect, and even some that are not, such as retrieving thread status and canceling threads (functionality provided by TM1Top), can be handled using the REST API. Functions and actions are available to create cubes, dimensions, subsets, hierarchies, TI processes and rules, as well as to monitor transaction logging and even commit TI code to a Git repository.
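
As a minimal sketch of what this looks like in practice, the snippet below lists cubes and inspects threads; the host, port and credentials are hypothetical, and it assumes an instance with the REST API enabled, native TM1 (basic) authentication and Python's requests library:

    import requests

    BASE = "https://tm1server:8010/api/v1"   # hypothetical host and HTTPPortNumber
    AUTH = ("admin", "apple")                # native TM1 (mode 1) credentials
    VERIFY = False                           # dev servers often use self-signed certs

    # List all cubes by name using a standard OData $select query.
    resp = requests.get(f"{BASE}/Cubes?$select=Name", auth=AUTH, verify=VERIFY)
    resp.raise_for_status()
    for cube in resp.json()["value"]:
        print(cube["Name"])

    # Inspect active threads (the TM1Top-style functionality mentioned above).
    threads = requests.get(f"{BASE}/Threads", auth=AUTH, verify=VERIFY)
    for t in threads.json()["value"]:
        print(t.get("ID"), t.get("State"), t.get("Function"))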

How do I Enable the REST API?

First, ensure that your TM1 version supports the REST API. It was introduced in version 10.2; however, a number of key features are not available before 10.2.2 FP5, so that is the recommended minimum version.

To access an instance via the REST API, the tm1s.cfg parameter “HTTPPortNumber” must be set to a unique port. Because this is a static parameter, the instance may need to be restarted before use.
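
For reference, the relevant tm1s.cfg fragment might look like this (the port number is illustrative; any unique, unused port works):

    [TM1S]
    ServerName=sdata
    DatabaseDirectory=.\data
    UseSSL=T
    # Unique port that exposes the OData/REST endpoint for this instance
    HTTPPortNumber=8010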

What Can’t I Do with the REST API?

At its core, the TM1 REST API is not end-user ready.  It is merely a tool for developers to interact with TM1 from external applications/languages and more easily integrate TM1 into your data stream.

It is possible to build end-user consumables (such as dashboards and reports that reference or interact with data stored in TM1 cubes) using the REST API as the connection to TM1. Note, however, that these solutions must be custom-built in another language (such as HTML, PHP, Python, Java, PowerShell or a combination of these) or built on top of PAW (which itself uses the REST API to connect to TM1 in the back end).

How is the REST API being used in the real world currently?

The REST API is the back-end connection for both IBM Planning Analytics Workspace (PAW) and IBM Planning Analytics for Excel (PAX).  Any reports/dashboards built in these tools will use the REST API seamlessly by default with no additional configuration/development required.

The majority of current Production-level use of the REST API at a number of clients is to remotely execute TI processes in a multi-threaded fashion. The benefit of using the REST API over the old TM1RunTI.exe utility is that it avoids CAM logon lock contention, as well as the need to build a custom thread-tracking mechanism. When coupled with Python's asyncio library, a script can manage thread limits and overall execution status tracking without the need to build flag files into your TIs.
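
A minimal sketch of this pattern, assuming Python 3.9+, the requests library, and hypothetical server details and process names; tm1.ExecuteWithReturn is the REST action that runs a TI process and returns its completion status:

    import asyncio
    import requests

    BASE = "https://tm1server:8010/api/v1"   # hypothetical instance
    AUTH = ("admin", "apple")
    MAX_PARALLEL = 4                         # thread limit managed in Python

    def run_ti(process, params):
        # tm1.ExecuteWithReturn runs the TI and reports its completion status.
        body = {"Parameters": [{"Name": k, "Value": v} for k, v in params.items()]}
        r = requests.post(f"{BASE}/Processes('{process}')/tm1.ExecuteWithReturn",
                          json=body, auth=AUTH, verify=False)
        r.raise_for_status()
        return r.json().get("ProcessExecuteStatusCode")

    async def run_limited(sem, process, params):
        async with sem:                      # cap concurrent TM1 threads
            return await asyncio.to_thread(run_ti, process, params)

    async def main():
        sem = asyncio.Semaphore(MAX_PARALLEL)
        months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
        tasks = [run_limited(sem, "Load.Sales", {"pMonth": m}) for m in months]
        for status in await asyncio.gather(*tasks):
            print(status)                    # overall status, no flag files needed

    asyncio.run(main())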

In addition to kicking off TI processes, other clients are using the REST API for minor data syncs between instances. For instance, with two TM1 instances that hold copies of the exact same cube structure, rather than using flat files and custom TI processes to push data between them, a simple Python script that creates a cellset on the source instance and pushes it to the target accomplishes the task without third-party software executing processes on both instances in sequence.
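
Below is a sketch of such a sync using the open-source TM1py library (our suggestion, not something named above; it wraps the same REST API). The instance details, cube and MDX are hypothetical, and the sketch assumes the MDX covers every dimension of the target cube in cube order:

    from TM1py import TM1Service
    from TM1py.Utils import element_names_from_element_unique_names

    # Hypothetical source and target instances with identical 'Sales' cubes.
    src = TM1Service(address="tm1-prod", port=8010, user="admin",
                     password="apple", ssl=True)
    tgt = TM1Service(address="tm1-rpt", port=8011, user="admin",
                     password="apple", ssl=True)

    mdx = """
    SELECT {[Month].[Jan]} ON COLUMNS,
           {TM1SubsetAll([Account])} ON ROWS
    FROM [Sales] WHERE ([Version].[Actual])
    """
    source_cells = src.cubes.cells.execute_mdx(mdx)

    # Re-key from element unique names to plain names and skip empty cells.
    target_cells = {
        element_names_from_element_unique_names(coords): cell["Value"]
        for coords, cell in source_cells.items()
        if cell["Value"] is not None
    }
    tgt.cubes.cells.write_values("Sales", target_cells)

    src.logout()
    tgt.logout()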

Topics: IBM Cognos TM1, IBM Planning Analytics, Rest API

Adding Single Sign-On to IBM Planning Analytics Local

Posted by Patrick Mauro on Sat, Jun, 01, 2019 @ 09:07 AM

Adding single sign-on capabilities is a great way to get the most out of IBM Planning Analytics Local, whether it’s part of an upgrade, or simply to enhance an existing install – in fact, it is often a business requirement for many clients. Unfortunately, however, the full procedure is not well documented by IBM – while bits and pieces are available, crucial information is often omitted. As a result, many companies struggle to implement the feature properly, if at all.

To help alleviate this problem, we have created a detailed guide on how to properly implement SSO based on our experience, which can be accessed here. This 13-page document includes detailed screenshots, links to any external docs or software needed, code block examples, etc.

What follows is a brief summary of this process – essentially, the configuration consists of three main steps:

  • Install and configure IIS
  • Install and configure Gateway
  • Modify configurations and finalize

Step 1: Install and Configure IIS

Internet Information Services (IIS) is Microsoft’s native web server software for Windows Server, which can be configured to serve a variety of different roles. In this context, it is required in order for the Planning Analytics server software to authenticate users based on Windows account information – so this must be set up first.

Available documentation often neglects to mention that this is a pre-requisite, and does not provide adequate information on what specific role services will be required – our documentation provides all details needed for setup. For additional information about IIS setup, Microsoft’s own IIS documentation may also be of use: https://docs.microsoft.com/en-us/iis/application-frameworks/scenario-build-an-aspnet-website-on-iis/configuring-step-1-install-iis-and-asp-net-modules

Step 2: Install and Configure Cognos Gateway

Cognos Gateway is an optional feature provided with Cognos Analytics (also known as BI), and is required in order to communicate with the IIS services and facilitate user log-in. However, as of this writing, a “full” install of CA does not include the Gateway by default, and it is not possible to cleanly add it to an existing CA install retroactively. As a result, a careful, separate install of only the Gateway component is required, configured to link to the existing CA instance.

However, even after Gateway is installed, the work is not done. Additional files must be added to Gateway: a group of deprecated files included with the web server install in a ZIP file, BI_INTEROP.ZIP. CA documentation mentions this as a requirement for Active Directory; it is also required for Gateway, but it needs additional modifications that are not detailed in existing docs. All needed information on these modifications is provided in our guide.

Step 3: Modify Configurations and Finalize

As part of the process above, we have essentially replaced the existing access point for CA with a new one. As such, the final step is the easiest: going through all the existing configuration files and making sure all references are accurate and include the new URIs, so the system communicates properly between each of the components.
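
As an illustration, the tm1s.cfg side of this step typically involves CAM parameters along the following lines (host names and ports are hypothetical; our guide covers the complete set of files and values):

    # tm1s.cfg (illustrative fragment)
    # Mode 5 = CAM authentication combined with native TM1 group assignments
    IntegratedSecurityMode=5
    # Internal dispatcher URI of the Cognos Analytics install
    ServerCAMURI=tcp://ca-server:9510
    # Gateway URI that end-user clients are redirected to for SSO
    ClientCAMURI=http://gateway-server/ibmcognos/cgi-bin/cognos.cgi
    # How often (in seconds) clients re-validate their CAM passport
    ClientPingCAMPassport=900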

Once each of the configurations has been adjusted properly, we should be all set: there should no longer be any user login prompt for PAX, Architect, CA, TM1Web or PAW. That said, our guide includes some additional features you might consider adding to enhance this even further.

You can access the full 13-page document at this link – but if you have any additional questions, feel free to contact ACG at info@acgi.com, or give our offices a call at (973) 898-0012. We’d be happy to help clarify anything we can!

Topics: IBM Cognos TM1, IBM Planning Analytics

Working with the Audit Log and Transaction Log in IBM Planning Analytics (IBM Cognos TM1)

Posted by Andrew Burke on Fri, Apr, 12, 2019 @ 02:59 PM

IBM Planning Analytics maintains a detailed log of all changes in the system caused by both users and processes, including changes to values and core structures. Maintaining full transparency is critical to facilitate key business and control requirements, including the following:

  • Understanding the drivers of changes from prior versions
  • Rolling back / restoring previous values in case of errors or system issues
  • Providing an audit trail of all system changes for control and compliance purposes
  • Obtaining better insight into the status of deliverables beyond what is available via Workflow
  • Understanding system usage for license compliance and ROI measurement

Two Types of Logging

There are two types of logs in IBM Planning Analytics. Both can be enabled or disabled by the system administrator as required:

  • Transaction Log - captures all changes to data caused by end users, rules or processes
  • Audit Log - captures changes to metadata, security, logon information and other system activity detail

A Transaction Log is a simple list of all changes in values along with the associated context, such as cube name, time and date, user name, pre- and post-change values, and the element of every dimension of the particular cube at the time of the change. A typical Transaction Log looks something like this:

[Image: sample Transaction Log entries]
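
Since the original screenshot is not reproduced here, the lines below are an illustrative, simplified sample based on the fields described above; the exact column layout varies by version, with the trailing columns holding one element per cube dimension:

    "20190412163301","jsmith","N","1200","1350","Sales","Actual","Apr-2019","CC-100","Net Revenue"
    "20190412163305","jsmith","N","0","500","Sales","Forecast","May-2019","CC-100","Net Revenue"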

The Audit Log follows a similar concept, but includes more descriptive / contextual information.

Reporting of data changes

Once captured, all transactions in the log can be brought into an IBM Planning Analytics reporting view and filtered for a specific application (cube), time period, account group, etc. to provide transparency into all changes made. Changes to structures and metadata (new accounts or other elements) are tracked in a similar view and can be included for reporting and analysis. The end view could look something like this:

[Image: sample log report view]

Some other practical benefits of using the log information include the following:

Getting Insight into System Usage

The audit log provides an easy view of how the system is used: users can select a particular time period and see which users logged in, when, and how often. This provides good visibility to ensure compliance and efficiency from a licensing perspective, but also to confirm that the company is getting ROI from the investment and that people are using the system to do their work.

Deliverable Status Check

As a complement to IBM Planning Analytics Workflow, the Audit Log provides immediate visibility into the status of a particular deliverable. Say the objective is to complete a forecast by Friday 5pm. As of Wednesday that week, the Workflow application shows all submitting Cost Centers as "In Progress" with no further information. The audit log will reveal how many users actually logged in and did work in a particular cost center / application, even though the work is still "In Progress" and not ready for submission. While anecdotal, this provides good insight into approximately where the organization is in the submission process and whether the deadline is likely to be met.

These are just a few useful applications of this not-widely-known feature. ACG has a set of tools and accelerators to help translate both the Transaction Log and the Audit Log into useful IBM Planning Analytics views that can be browsed and analyzed. Any thoughts, feedback, examples of use or other suggestions are welcome.

 

Topics: IBM Cognos TM1, IBM Planning Analytics

Improving IBM Cognos Analytics Report Performance by using Multi-Dimensional Data Sources (IBM Planning Analytics or Cognos OLAP)

Posted by Steve Moore on Sat, Dec, 15, 2018 @ 11:54 AM

As many clients tell us, report performance can be a challenge in IBM Cognos Analytics environments. In most traditional IBM Cognos deployments, reports are developed using a relational data store as their primary source of data, with data retrieved from the relational table structures using SQL scripting for reporting and analysis.

In many cases, the data stores are modeled using popular Datamart or Enterprise Data Warehouse (EDW) schemas such as Star or Snowflake. This approach provides the flexibility of using SQL to produce data results, but it is inefficient when reports display aggregates.

Deployments that are SQL-heavy need to manage the load of many simultaneous requests to the EDW during the day, as well as intra-day refreshes to provide near-real-time results. This may lead to performance issues depending on the size and nature of the reports and the volume of data. Since operational reports are typically detailed in nature, they need to be developed in this fashion.

As an alternative, there is an opportunity to better leverage IBM Planning Analytics or other multi-dimensional technology to increase efficiency and improve performance of aggregate-based reports. In OLAP, the data is already aggregated and rolled up to the required summary points.

Typically, requests using MDX against an OLAP data source are significantly less resource-intensive than SQL requests against a large database with joined tables, for two main reasons (see the schematic comparison below the list):

  1. MDX requests do not need to aggregate detailed rows, since the data is already aggregated
  2. There is no concept of a ‘join’ in dimensional reporting
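
To make the two points concrete, here is a schematic (and entirely hypothetical) comparison; table, column and cube names are invented:

    -- Relational: join the fact table to a dimension, then aggregate detail rows
    SELECT p.industry, SUM(f.revenue)
    FROM   fact_sales f
    JOIN   dim_customer p ON p.customer_key = f.customer_key
    GROUP  BY p.industry;

    -- Dimensional (MDX): no join, and the rollups already exist in the cube
    SELECT {[Customer].[Industry].Members} ON ROWS,
           {[Measures].[Revenue]} ON COLUMNS
    FROM   [Sales]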

This approach has many significant benefits:

Improved Performance and Overall User Experience

Performance in generating aggregate-based, dimensionally authored reports is greatly superior to an equivalent SQL-based approach, because dimensionally authored reports against an OLAP source do not have to aggregate the result set. One ACG client reduced the run time of a report from 90 minutes to under 1 minute.

Reduced EDW Load

Reports that are either scheduled or on demand, if successfully converted to dimensional reports, take a significant load off the EDW in terms of the number and complexity of report requests. In fact, calculating aggregates is considered one of the most resource-intensive types of SQL request for the EDW. This conversion will also improve the performance of the remaining relational on-demand reports.

Reduced Disruption to Existing Process and Environment

Converting reports from relational SQL to dimensional is significantly less disruptive to users during intra-day refreshes. Many OLAP technologies provide non-disruptive update technology that minimizes or eliminates disruption for the user. For example, dimensional reports developed in Cognos and sourced by TM1 cubes are not only real-time but minimally disruptive, reducing the need for a separate ‘reporting’ cube. PowerPlay cubes have non-disruptive cube update technology that is completely transparent to the user.

Better and more granular security

Typical multi-dimensional designs have security defined at a much more granular level, providing greater protection of the data than a relational design provides.

In the client example above, a set of reports pointed to IBM Planning Analytics as a data source but were not written properly and took over 90 minutes to run. After a brief revision and re-write of the core logic (which took less than a few days for that particular report), we were able to reduce the run time to just 1 minute. The faster response for users, together with the reduced workload on IBM Planning Analytics, represented a huge benefit.

Topics: IBM Planning Analytics, IBM Cognos Analytics

Forrester Names IBM a “Leader,” Exciting Updates to Watson Portfolio

Posted by Jessica Lee on Thu, Sep, 20, 2018 @ 10:59 AM

The future looks to data science to handle the ubiquity and increasing complexity of one of our most valuable resources—data. Among the essentials to succeed in the data science space are strong Predictive Analytics and Machine Learning (PAML) tools.

Earlier this month, Forrester published an evaluation of the 13 most notable PAML solutions in order to inform the choices of enterprise leaders. IBM was named a “leader” and Forrester described Watson Studio Cloud as a “perfectly balanced PAML solution for enterprise data science teams.” Needless to say, it has been an exciting month for IBM, as there have also been announcements of new capabilities and updates coming to various members of its data science portfolio.

At a glance, we will be looking at:

  • The Future of Multimodal PAML
  • Watson Studio as a Leader
  • What’s Next from IBM

The Future of Multimodal PAML

PAML solutions have always varied in that some are more code-based while others are built around a visual interface. A fair divergence, one might think, in order to accommodate data scientists with different modeling backgrounds and preferences. Like any team, however, a team of data scientists is stronger with collaboration, and as the need for collaboration grows, so too does the importance of multimodal PAML solutions. These solutions provide access to both code-based and visual tools for different members of the team, and they have had an understandable appeal to enterprises for this reason. However, according to Forrester, they will have to do more. In the next two years, enterprise data science teams must support the thousand-model vision, referring to the countless “applications and business processes that could, but do not currently, benefit from predictive models.” Multimodal PAML solutions will have to adequately support the greater needs of the data science life cycle, like model automation and access to open source innovation.

Watson Studio as a Leader

IBM provides a multimodal PAML solution built to survive—and thrive—in this thousand-model future. Watson Studio not only addresses the needs of a team’s data scientists and data science cycle, but it is also, as Forrester noted, “designed for all collaborators—business stakeholders, data engineers, data scientists, and app developers.” As data circulates between stages of the data science cycle, involving different tools and different members of the team, it can be handled entirely within Watson Studio’s multifaceted platform.

A few examples:

  • Data scientists have open source coding tools like R, Python, and Jupyter notebooks, along with visual model-building tools like SPSS Modeler, all embedded into a single platform.
  • Application developers have model management and deployment tools, with different automation and deployment options, readily available to them.
  • Analysts can create visualizations of trends and anomalies in data using interactive dashboards for communication to management and business stakeholders during meetings.
  • Project Managers can see what team members are working on for various projects, and which assets are being used for those projects.

Watson Studio’s powerful engineering is complemented by an ergonomic design that makes for intuitive, effective use by all team members.

What’s Next from IBM

There are plenty of exciting updates to look forward to in the future of IBM’s data science offerings, one of which was announced last week at IBM’s signature “Winning with AI” event in New York City. Effective October 2nd, the on-premise offering of Data Science Experience (DSX) Local will be rebranded as “Watson Studio Local.” This rebranding not only more simply unifies all of IBM’s data science offerings under the “IBM Watson” name, but it is part of a greater movement to bring the same great capabilities to customers regardless of whether their platform is local or on the cloud. As part of this, we will see the availability of an integrated SPSS Modeler add-on in the new Watson Studio Local. This tool has been extremely popular for visual-based modeling on the cloud platform, so it will be exciting to see it as an option for the on-premise platform.

We can also expect to see the benefits of Watson Studio’s new partnerships with DefinedCrowd and Figure Eight. As leaders in the data annotation space, these partners have extensive experience with various points in the data science cycle, including data processing and machine learning model training. These partnerships are a part of IBM’s continued collaboration with others in the data science community to make Watson Studio a robust, well-rounded solution.

Already a leader in the industry, IBM is taking no breaks from bringing more powerful capabilities and solutions to its portfolio. If you are interested in learning more about Watson Studio and the benefits that it can bring to your company, please send an email to jlee@acgi.com.

Source:

Forrester Research Inc. The Forrester Wave™: Multimodal Predictive Analytics and Machine Learning Solutions, Q3 2018 by Mike Gualtieri and Kjell Carlsson, Ph.D., September 5, 2018

Topics: IBM Watson Studio

Working with Virtual Dimensions in IBM Planning Analytics 2.0

Posted by Theo Chen on Mon, Apr, 23, 2018 @ 02:01 PM

The ability to create virtual dimensions from attribute-based hierarchies is a key update in IBM Planning Analytics 2.0 and a huge source of value. It is a major differentiator for IBM PA compared to similar platforms, providing even greater flexibility to analyze data using user-defined parameters on the fly. With virtual hierarchies, companies have much greater flexibility in designing solutions that provide greater efficiency while maintaining the flexibility to include attributes for thorough analysis.

What are Virtual Dimensions

Virtual dimensions are dimensions created in an existing IBM Planning Analytics system on demand by selected end users (system administrators). These dimensions are not part of the core system structure / design; rather, they are created ad hoc as needed based on attributes of existing dimension elements. Attributes can be created while the system is in use, and there is no practical limitation on how many attributes can be created for each member.

Once a virtual dimension is created, it can be used just like any other dimension: it can be selected for reporting, brought into a cross-tab view for analysis, or used as a target for input of (plan, forecast) data.

What Virtual Dimensions are NOT

Virtual dimensions in IBM PA are NOT the same as the ability to create alternate rollups / hierarchies or to sort / filter by attributes. We find the concept is a bit hard to understand initially, and people default to the capabilities they know. Unlike filtering and alternate hierarchies, virtual dimensions go deeper and provide a more thorough way to analyze the data.

Example:

We have a sales reporting application with Customer and Business Unit as the two key dimensions; other dimensions include Account, Time, etc. The Customer dimension is organized by Industry: Customers roll up to Sectors, which roll up to Industries, which roll up to Total Customer.

Let’s say I want to understand my sales by size of company (Enterprise vs Mid-Market vs Small and Medium Enterprise, or SME) in addition to the industry rollup. I do not have “Company Size” defined as a dimension so under previous rules I would have the following options:

  • Redesign the cube to include “Size” as a dimension – depending on the size of the overall model that could be a significant undertaking and could take a lot of time
  • Build an alternate rollup of Customer by size – in this case I would have to choose the Industry or Size rollup for reporting and analysis but I could not use both and would still not get the desired cross-view
  • Embed “Size” inside the existing “Industry” rollup – this would be extremely inefficient, as I would have to include multiple Size rollups within each Industry / Segment, repeat them for each Industry / Segment, and deal with conflicting element values – not desirable from a maintenance perspective, and I would still not get the same flexibility for analysis

Enter “Virtual Hierarchies”. With IBM Planning Analytics 2.0, I can simply create a new Attribute and label each Customer as an Enterprise, Mid-Market firm or SME. Once that is done, I turn this Attribute into a Virtual Hierarchy and it shows up in my “Set” menu. From there, I can simply drag the “Size” hierarchy into the cross-tab view and see the breakdown of customers by size and industry in a simple cross-tab. I am able to drill down or pivot to analyze the data, and even input into the virtual intersections to post adjustments or forecast sales. And of course, element security still works at the leaf level. Extremely powerful…
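
For those who prefer to script this, below is a rough sketch of creating such a hierarchy through the REST API. This is a minimal sketch only: the endpoint and payload shape follow the OData model as we understand it, the server and element names are hypothetical, and depending on version the referenced leaf customers may also need to be listed under Elements:

    import requests

    BASE = "https://tm1server:8010/api/v1"   # hypothetical instance
    AUTH = ("admin", "apple")

    # A "Size" hierarchy inside the existing Customer dimension: one
    # consolidation per attribute value, with existing customers as children.
    hierarchy = {
        "Name": "Size",
        "Elements": [
            {"Name": "Enterprise", "Type": "Consolidated"},
            {"Name": "Mid-Market", "Type": "Consolidated"},
            {"Name": "SME", "Type": "Consolidated"},
        ],
        "Edges": [
            {"ParentName": "Enterprise", "ComponentName": "Customer A", "Weight": 1},
            {"ParentName": "SME", "ComponentName": "Customer B", "Weight": 1},
        ],
    }
    r = requests.post(f"{BASE}/Dimensions('Customer')/Hierarchies",
                      json=hierarchy, auth=AUTH, verify=False)
    r.raise_for_status()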

Key Benefits

The key benefits of this capability include the following:

  • Savings in memory (RAM), due to fewer core dimensions and thus a smaller core model
  • Better overall performance and usability of the model with less clutter / complexity
  • Much greater flexibility to adjust existing models and get deeper insight
  • Ability to conform to model standards, with any customization done locally

What to Look Out For

Some points to be aware of when working with virtual hierarchies: they can currently only be built with 2 levels, and TurboIntegrator scripting is required for deeper hierarchies with more levels. Also, elements in a virtual hierarchy are not elements in the main hierarchy itself; each element in a virtual hierarchy has to be unique from every element in the dimension. There is no security (yet) for 'virtual' elements, although that should be addressed soon.

The impact of Virtual Hierarchies will be different for existing vs net new applications. For existing models, it will add flexibility through the ability to expand structures without going through a potentially substantial application redesign. For new solutions, it creates an opportunity to design models with more simplicity and rely on Virtual Dimensions to provide scalability and flexibility in the future.

Give us a call to discuss how this great new capability can add value to your platform and review options to provide more insight and analytical power.

Topics: TM1 Technology, IBM Cognos TM1, Performance Management, Financial Planning and Analysis, IBM Planning Analytics

IBM Cognos Controller 10.3.1 Released

Posted by Rob Harag on Sun, Dec, 03, 2017 @ 10:35 AM

The latest version of IBM Cognos Controller is now available and features integration with IBM Planning Analytics Workspace. This provides tighter integration, opens up possibilities for better insight and analytics, and puts more power in the hands of users.

IBM Planning Analytics Workspace is a powerful interface that facilitates reporting, visualization, analytics and planning / forecasting. With better integration to Controller, it can further be used as a common interface for both the Record-to-Report and the Report-Analyze-Model-Plan cycles.

A high-level overview of the integration capabilities can be viewed at the link below:

https://ibm.box.com/s/nscuagvqlfo37m5sn3ckm7hb6lo2y75f

IBM Cognos Controller is a robust application supporting Financial Consolidation and Close. It is a finance-owned solution, managed and maintained by Finance, offering out-of-the-box consolidation capabilities without the need for any programming or scripting. With its pre-built capabilities to manage the close process, including workflow, review and reporting, it vastly simplifies and streamlines the month-end process and provides a robust foundation that improves control and compliance.

IBM Cognos Controller is available with equal functionality for on-premise as well as cloud based deployment. It offers over 260 pre-built and packaged reports to facilitate ease of adoption and maximize ROI.

Some additional capabilities of the Controller 10.3.1 release include the following:

  • Improved data import in Controller Web
  • More efficient remediation of import errors in real time during the import
  • Drill from data entry forms to the source level detail in the import file
  • A number of other popular customer enhancement requests
  • Limited-use license for PA Local & Workspace
  • Support for Click-to-Run

The publicly accessible Knowledge Center site is https://www.ibm.com/support/knowledgecenter/SS9S6B_10.3.1

Topics: IBM Cognos Controller, Financial Consolidation and Close

IMPORTANT: Your TM1 System May Stop Working on Nov 25, 2016

Posted by Kevin Nichols on Sun, Sep, 11, 2016 @ 09:43 AM

Important Notice for All IBM Cognos TM1 Users:

There is a possibility that your TM1 system may stop working on November 25, 2016.

The existing TM1 security certificates used in the Secure Sockets Layer (SSL) communications between TM1 clients and servers are scheduled to expire on November 24, 2016. If your system is affected, your users will not be able to connect to your TM1 Server or use TM1 Web, TM1 Perspectives, etc. from November 25, 2016 unless you take appropriate action.

Is Your System Impacted?

If you are running TM1 version 10.x, your system is most likely impacted, as SSL is turned ON by default in these versions. If you are running an older version of TM1, we recommend a system review to ensure continuity of service.

What Should You Do?

IBM has created a new certificate to replace the expiring one, and it provides even stronger encryption (2048-bit vs. 1024-bit). The new certificate has an expiration date of August 25, 2022 and needs to be installed in place of the existing one. The details for converting your TM1 environment to the new SSL certificate can be found in the following IBM Tech Note:

http://www-01.ibm.com/support/docview.wss?uid=swg21697266

Please contact ACG at support@acgi.com if you have questions, concerns or would like to discuss. We are happy to help review your TM1 system, confirm all updates are made properly and ensure TM1 system availability.
