Will Kennedy | Tue, Apr 07, 2026 @ 03:44 PM
We have created an autonomous data quality agent designed to detect data anomalies, so that your team can spend less time tracking bad data and more time making strategic decisions. Our custom agent combs through your TM1 model as an offline task, surfacing data quality issues for review and saving your team time.
Every period, the same input fields, aggregates, and patterns get checked by hand when data ‘doesn’t look right’. Planning models are particularly vulnerable to input errors: one incorrect value in a driver can propagate silently downstream through allocations and calculations before it is detected. Instead of performing routine analysis, analysts are left tracking down why top-line dollars are off. But the real cost is not just the time spent checking; it is the checks that never happen. Under time pressure, analysts may miss issues before reporting deadlines, while silent errors collapse into larger aggregations and get buried in historical data. Now your forecast and assumptions are off, and you can no longer slice and dice your cubes with confidence that the values are meaningful and correct. It is this persistent issue in the enterprise planning space that our Data Quality Agent seeks to alleviate.
Our Data Quality Agent connects directly to live TM1 data in the same fashion as our Financial Analyst Agent. It iterates over the desired data surface and documents its findings with no manual intervention. It works across multiple lenses simultaneously: a within-slice pass asks whether related dimension members look right in a single view, then a cross-slice pass (or several) asks whether a given member behaves consistently across the remaining dimensions in the upper context of the view.
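To illustrate the two lenses, the sketch below runs a within-slice check and a cross-slice check over a toy price cube held in a plain dictionary. The cube shape, rule logic, and thresholds here are our own assumptions for demonstration, not the agent's actual implementation.

```python
from statistics import median

# Hypothetical slice of a TM1-style cube: {(product, month): price}.
# Member names and values are illustrative only.
cube = {("ProdA", m): 10.0 for m in range(1, 13)}
cube[("ProdA", 7)] = 95.0  # a planted input error
cube.update({("ProdB", m): 20.0 + m * 0.1 for m in range(1, 13)})

def within_slice_checks(cube, month):
    """Within one month's view, flag members with non-positive values."""
    return [
        (product, m, "non-positive value")
        for (product, m), value in cube.items()
        if m == month and value <= 0
    ]

def cross_slice_checks(cube, product, tolerance=3.0):
    """Across months, flag values far from the member's own median,
    scaled by the median absolute deviation."""
    series = sorted(v for (p, _), v in cube.items() if p == product)
    mid = median(series)
    spread = median(abs(v - mid) for v in series) or 1.0
    return [
        (product, m, f"outlier: {v} vs median {mid}")
        for (p, m), v in cube.items()
        if p == product and abs(v - mid) / spread > tolerance
    ]

# The planted July error stands out against ProdA's own history.
print(cross_slice_checks(cube, "ProdA"))
```

In a real run the dictionary would be populated from a live TM1 view rather than hardcoded, but the two-pass structure is the same: one pass looks within a single slice, the other compares a member against its own behavior elsewhere.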
The final output is a structured findings report that provides a starting point for investigating each data quality issue uncovered during discovery.
What the agent flags is as important as what it deliberately ignores. With knowledge of the specific TM1 model, we exclude calculated outputs and structural differences between members that are expected for the particular use case.
For example, in a price forecasting model where prices are derived through calculation, decimal precision flags are often not meaningful. A value with 12 decimal places may simply indicate a consolidated-level calculation rather than a true anomaly. In contrast, in models with manual price inputs, identifying this same pattern can be useful: in this case, we will want to know that a price field is a consolidated entry that has been spread. To illustrate this point, here is an example report output generated by our Data Quality Agent against a pharmaceutical reference pricing model:

In this example, our agent examines a full year of data and pulls out three products with data quality issues worth investigating further. In its default configuration, the Data Quality Agent flags forecasted prices as suspicious when they carry more than two decimal places. Once configured for this model, the agent silently suppresses those warnings and serves the remaining findings back to the user.
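A precision rule with per-model suppression might look like the following sketch. The rule name, the config shape, and the two-decimal threshold are hypothetical stand-ins, not the agent's actual schema.

```python
from decimal import Decimal

# Illustrative config: rule IDs to suppress for a given model.
# In the forecast model, derived prices legitimately carry long decimals.
SUPPRESSED_RULES = {"price_forecast_model": {"excess_decimal_places"}}

def decimal_places(value):
    """Count the significant decimal places of a numeric value."""
    d = Decimal(str(value)).normalize()
    return max(-d.as_tuple().exponent, 0)

def check_precision(model, cell_ref, value, max_places=2):
    """Flag a price carrying suspicious precision (often a spread
    consolidated entry), unless the rule is suppressed for this model."""
    if "excess_decimal_places" in SUPPRESSED_RULES.get(model, set()):
        return None
    places = decimal_places(value)
    if places > max_places:
        return {
            "cell": cell_ref,
            "rule": "excess_decimal_places",
            "detail": f"{value} has {places} decimal places",
        }
    return None
```

The same value triggers a finding in a manual-input model and passes silently in the forecast model, which mirrors the configuration behavior described above.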
Nevertheless, delivering findings quickly to the right people is only half the challenge. It is equally important to ensure the findings are reliable enough to act on without manual checks. At this point, the engineering behind the agent becomes as critical as its output. Here, the two approaches to building a Data Quality Agent meaningfully diverge.
In the more conservative pattern, detection logic is written explicitly: missing inputs, ratio outliers, volume gaps, and so on. The agent catches what it is built to catch and nothing more. In this case, AI handles one job: turning structured findings into readable business language with an action plan. This approach is the right fit for stable models, known failure modes, and compliance-sensitive environments where predictability matters more than flexibility. This solution architecture allows for deterministic assessments of data quality: for a given slice of data, the same issues will be flagged every time. However, if you adopt this pattern, you will have to write new logic for every new type of anomaly or issue requested. To catch arbitrary anomalies and quality issues, you will need to let the AI agent observe the data and look for anomalous patterns in some capacity.
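A minimal sketch of this conservative pattern follows, with the single AI step stubbed out as a plain function. All rule names, thresholds, and the sample data are illustrative assumptions, not the agent's actual logic.

```python
# Deterministic detection rules; AI (stubbed here) only narrates findings.

def missing_inputs(data):
    """Explicit rule: any field left empty."""
    return [k for k, v in data.items() if v is None]

def ratio_outliers(data, num, den, lo=0.5, hi=2.0):
    """Explicit rule: a ratio drifting outside a hardcoded band."""
    if data.get(num) is None or not data.get(den):
        return []
    r = data[num] / data[den]
    return [] if lo <= r <= hi else [f"{num}/{den} ratio {r:.2f} out of [{lo}, {hi}]"]

def run_rules(data):
    """Same input always yields the same findings: fully deterministic."""
    findings = [f"missing input: {k}" for k in missing_inputs(data)]
    findings += ratio_outliers(data, "revenue", "units")
    return findings

def narrate(findings):
    """Placeholder for the one AI job: findings -> business language."""
    if not findings:
        return "No issues found."
    return "Please review: " + "; ".join(findings)

data = {"revenue": 500.0, "units": 100.0, "headcount": None}
print(narrate(run_rules(data)))
```

Note that nothing outside the explicit rules can ever be flagged, which is exactly the trade-off described above.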
In the more open-ended approach, there are no hardcoded thresholds. The model takes in schema context about the TM1 model, reads the data, and identifies potential issues based on our prompts and instructions. In this pattern, we specify how data is expected to appear, generalizing to anomaly types that were not anticipated when the model was built. This flexibility comes with a trade-off: keeping findings in scope and consistent across runs requires significant engineering discipline. It can be the right fit for newer models, evolving data structures, or situations where the goal is discovery rather than confirmation. If you are not exactly sure what anomalies you wish to capture and elevate with AI, this is a pattern worth considering. With the right configuration, we have seen consistent responses on repeated testing despite the non-deterministic nature of this solution architecture. However, any time an LLM is asked to make a judgment, identical outputs cannot be guaranteed without introducing additional, cost-bearing judging stages.
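To make the open-ended pattern concrete, here is a sketch of how schema context, expectations, and a data slice might be assembled into a single review prompt. The prompt wording, the sample expectations, and the commented-out `call_llm` hook are all assumptions for illustration, not our production pipeline.

```python
import json

def build_review_prompt(schema, rows, expectations):
    """Pack schema context, stated expectations, and a data slice into
    one prompt, asking the model for structured findings back."""
    return "\n\n".join([
        "You are reviewing planning data for quality issues.",
        "Cube schema:\n" + json.dumps(schema, indent=2),
        "Expectations:\n- " + "\n- ".join(expectations),
        "Data slice:\n" + json.dumps(rows, indent=2),
        "List anything anomalous as JSON objects with "
        "'cell', 'issue', and 'severity' fields.",
    ])

schema = {"dimensions": ["Product", "Month", "Measure"]}
expectations = [
    "Prices are manual inputs with at most 2 decimal places",
    "Month-over-month price moves above 20% are unusual",
]
rows = [{"Product": "ProdA", "Month": "Jul", "Price": 95.0}]

prompt = build_review_prompt(schema, rows, expectations)
# response = call_llm(prompt)  # hypothetical model call, not shown here
```

The expectations list is where the engineering discipline lives: the tighter and more explicit those statements are, the more consistent the findings tend to be across runs.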
In some specialized cases it may be beneficial to pursue a hybrid approach: first execute pure logic for a suite of defined test cases, then examine what is left over with a well-scoped agentic pipeline. The results are then merged: we will have captured 100% of the primary test anomalies and leveraged agentic, open-ended analysis to catch anything outside that scope.
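The hybrid pattern could be sketched as follows: deterministic rules run first, and the agentic stage only sees the residual cells. The "agentic" judgment here is a deliberately fake placeholder for an LLM call, and all names and thresholds are illustrative.

```python
# Hybrid pattern sketch: pure logic first, scoped agentic pass second.

def deterministic_pass(cells):
    """Primary test cases: missing or negative values, caught 100% of the time."""
    return {ref for ref, v in cells.items() if v is None or v < 0}

def agentic_pass(cells, already_flagged):
    """Stand-in for the open-ended agent: only examines cells the
    deterministic rules did not already explain. A real implementation
    would prompt an LLM over this residual; here we fake one judgment."""
    residual = {ref: v for ref, v in cells.items() if ref not in already_flagged}
    return {ref for ref, v in residual.items() if v is not None and v > 1000}

def hybrid_findings(cells):
    primary = deterministic_pass(cells)
    return primary | agentic_pass(cells, primary)

cells = {"A": 10.0, "B": None, "C": -5.0, "D": 50000.0}
print(sorted(hybrid_findings(cells)))  # ['B', 'C', 'D']
```

Scoping the agentic stage to the residual keeps its token cost down and prevents it from re-litigating anomalies the deterministic rules already guarantee.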
The agent runs automatically every period without analyst intervention and produces a data quality report. Your team spends time on judgment and escalation rather than cell-by-cell checking or root cause analysis. The agent flags and the analyst decides. That boundary is intentional and should remain intact. The goal is not to remove human judgment from the process. It is to make sure human judgment is applied where it is best served. With the right configuration, this Data Quality Agent will save your team time and energy spent hunting down bad data before it halts your real work.
If you have questions or would like to discuss in more detail, contact us at solutions@acgi.com.