Description: Flexera One - IT Asset Management - NAM - batch processing task delays
Timeframe: November 28, 2025, 9:31 PM PST to December 1, 2025, 9:12 PM PST
Incident Summary
On Friday, November 28, 2025, at 9:31 PM PST, our teams identified an issue affecting batch processing tasks for IT Asset Management (ITAM) in the NAM region, impacting a subset of our customers. Although the Flexera One platform remained accessible, the affected customers experienced delays and failures in batch processing tasks, along with related error messages.
The disruption primarily affected batch processing and license reconciliation workflows, leaving tasks stuck in an “In Process” state for extended periods.
Upon investigation, our teams determined that the root cause was database storage exhaustion in the NA production environment. While clearing stuck jobs provided temporary relief, newly submitted jobs continued to face the same database constraints and failure patterns. To address this, we expanded the database storage capacity and implemented additional measures to clear the backlog and expedite inventory processing.
By December 1, 2025, at 9:12 PM PST, all services had returned to normal. The technical teams confirmed that job execution was proceeding without issues, and no new stalls were observed. The incident was officially declared resolved at that time.
Root Cause
The investigation revealed that the issue stemmed from database storage exhaustion in the North America production environment. A critical internal function, intended to eliminate obsolete data, encountered a failure during execution due to a software bug. Consequently, database operations began to experience primary key conflicts and deadlock situations during inventory and reconciliation processing. These conflicts resulted in tasks repeatedly retrying without making progress, particularly when processing reached the license reconciliation stage.
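To make the failure mode above concrete, the following is a minimal, illustrative sketch (not Flexera's actual schema or code): a cleanup routine fails to delete an obsolete row, so a re-submitted job collides with that stale row's primary key and retries without making progress. The table name `inventory_jobs` and the job IDs are hypothetical.

```python
import sqlite3

# Hypothetical stand-in for the production jobs table; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory_jobs (job_id INTEGER PRIMARY KEY, status TEXT)")

# A completed job whose row the cleanup function should have removed.
conn.execute("INSERT INTO inventory_jobs VALUES (101, 'done')")

# The cleanup function fails here (standing in for the software bug),
# so the obsolete row for job 101 is never deleted.

# Reconciliation later re-submits job 101 and collides with the stale row.
attempts = 0
for _ in range(3):  # bounded here; the affected tasks retried indefinitely
    attempts += 1
    try:
        conn.execute("INSERT INTO inventory_jobs VALUES (101, 'in_process')")
        break
    except sqlite3.IntegrityError:
        continue  # primary key conflict: each retry hits the same stale row

print(attempts)  # 3 -- no attempt made forward progress
```

Because the conflicting row is never cleaned up, every retry fails identically, which matches the observed pattern of tasks spinning at the reconciliation stage.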
Remediation Actions
· Database Storage Expansion - Expanded database storage capacity in the NA production environment to relieve immediate storage constraints.
· Batch Processor Workload Rebalancing - Paused processing on older batch processor instances to allow stalled tasks to clear from the system.
· Deployment of New Batch Processor Instances - Started new batch processor instances to take over workload processing.
· Active Queue and Processing Monitoring - Actively monitored job queues and processing behavior to confirm forward progress.
· Bug Tracking - Logged the cleanup-function bug for further investigation and fix development.
Future Preventative Measures
· Stabilization of Data Cleanup Mechanisms - A permanent fix will be implemented for the identified bug so that the data cleanup mechanism operates reliably under high system load and low database storage conditions.
· Database Monitoring Review - Enhanced monitoring and alerting will be reviewed and implemented for database storage utilization, with clearly defined thresholds and escalation paths. This will enable earlier detection of capacity risks and allow corrective action to be taken before storage exhaustion impacts production workloads.
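The threshold-based alerting described above can be sketched as follows. This is a minimal illustration under stated assumptions: the 80% warning and 90% critical thresholds, the function name `classify_utilization`, and the alert levels are all hypothetical examples, not Flexera's actual monitoring configuration.

```python
# Illustrative thresholds only; real values would come from capacity planning.
WARN_THRESHOLD = 0.80  # early warning: review growth and plan expansion
CRIT_THRESHOLD = 0.90  # escalate: act before exhaustion halts batch processing


def classify_utilization(used_bytes: int, capacity_bytes: int) -> str:
    """Map current database storage utilization to an alert level."""
    ratio = used_bytes / capacity_bytes
    if ratio >= CRIT_THRESHOLD:
        return "critical"
    if ratio >= WARN_THRESHOLD:
        return "warning"
    return "ok"


print(classify_utilization(75, 100))  # ok
print(classify_utilization(85, 100))  # warning
print(classify_utilization(95, 100))  # critical
```

Wiring such a check into monitoring with a clear escalation path at each level would surface storage-capacity risk well before it can stall production workloads, as happened in this incident.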