ONLINE RELIABILITY ESTIMATION OF SOURCES IN STREAMING ANALYSIS OF MULTIMODAL TIME SERIES WITH ISOTONIC REGRESSION CALIBRATION

Authors

Uzun, I., Lobachev, M.

DOI:

https://doi.org/10.32782/2786-9024/v4i6(38).359293

Keywords:

machine learning, data analysis, information systems, decision support systems, multimodal time series, online calibration, isotonic regression, degradation detection, reliability estimation, streaming data.

Abstract

Streaming intelligent decision support systems processing multimodal time series operate under causality constraints and must satisfy requirements of low latency, bounded computational budgets, and controllable responses to environmental change. A critical practical risk in such pipelines is the temporary degradation of individual sources (missing values, elevated noise, scale shifts), which can masquerade as concept drift and trigger unstable or excessive control actions. This paper considers online estimation of source reliability as a causal probabilistic assessment of being in a non-degraded state and shows that practical control requires a calibrated scale: the output value must be interpretable as the frequency of the “non-degraded” regime under relevant conditions. The proposed approach combines lightweight degradation proxy signals suitable for online computation with isotonic regression calibration, which provides a monotone mapping from scores to correct probabilities. Key experimental results demonstrate ROC-AUC of 0.86 ± 0.07 for the calibrated variant and calibration improvement from ECE of 0.18 ± 0.07 (uncalibrated) to ECE of 0.08 ± 0.04 (calibrated) at acceptable time costs: simple proxy scales have microsecond latencies, while the full online model maintains a mean latency of approximately 150 µs, meeting the needs of streaming pipelines.
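The core idea in the abstract can be sketched in a few lines: map raw degradation proxy scores to calibrated probabilities with isotonic regression, then measure the improvement with the expected calibration error (ECE). This is a minimal illustration, not the paper's implementation; the synthetic scores, labels, and the `expected_calibration_error` helper are assumptions for the sake of the example.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def expected_calibration_error(p, y, n_bins=10):
    """ECE: bin predictions, average |mean predicted prob - empirical frequency|,
    weighted by the fraction of samples in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (p >= lo) & (p <= hi) if hi == 1.0 else (p >= lo) & (p < hi)
        if mask.any():
            ece += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return ece

rng = np.random.default_rng(0)
# Hypothetical degradation proxy scores in [0, 1]; higher = more likely non-degraded.
scores = rng.uniform(0.0, 1.0, 2000)
# Synthetic labels (1 = non-degraded): informative but deliberately miscalibrated,
# since the true probability is scores**2, not scores.
labels = (rng.uniform(0.0, 1.0, 2000) < scores**2).astype(float)

# Isotonic regression learns a monotone score -> probability mapping.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrated = iso.fit_transform(scores, labels)

ece_raw = expected_calibration_error(scores, labels)
ece_cal = expected_calibration_error(calibrated, labels)
```

In a streaming setting the fitted mapping would be applied causally to incoming scores (e.g. via `iso.predict`) and refitted periodically on a sliding window of labeled history; the monotonicity constraint preserves the ranking of sources, so ROC-AUC is unchanged while the probability scale improves.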

References

T. Baltrušaitis, C. Ahuja, L.-P. Morency, “Multimodal machine learning: A survey and taxonomy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 2, pp. 423–443, 2019. DOI: 10.1109/TPAMI.2018.2798607.

D. Lahat, T. Adali, C. Jutten, “Multimodal data fusion: an overview of methods, challenges, and prospects,” Proceedings of the IEEE, vol. 103, no. 9, pp. 1449–1477, 2015. DOI: 10.1109/JPROC.2015.2460697.

P. Dawid, “Present position and potential developments: Some personal views: Statistical theory: the prequential approach,” Journal of the Royal Statistical Society. Series A (General), vol. 147, no. 2, pp. 278–292, 1984. DOI: 10.2307/2981683.

A. Bifet, J. Montiel, J. Read et al., Machine Learning for Data Streams with Practical Examples in MOA, MIT Press, 2018.

J. Gama, I. Žliobaitė, A. Bifet, M. Pechenizkiy, A. Bouchachia, “A survey on concept drift adaptation,” ACM Computing Surveys, vol. 46, no. 4, Art. 44, 2014. DOI: 10.1145/2523813.

J. Lu, A. Liu, F. Dong, F. Gu, J. Gama, G. Zhang, “Learning under concept drift: A review,” IEEE Transactions on Knowledge and Data Engineering, vol. 31, no. 12, pp. 2346–2363, 2019. DOI: 10.1109/TKDE.2018.2876857.

R. S. M. Barros, S. G. T. C. Santos, “An overview and comprehensive comparison of ensembles for concept drift,” Information Fusion, vol. 52, pp. 213–244, 2019. DOI: 10.1016/j.inffus.2019.03.006.

A. Kendall, Y. Gal, “What uncertainties do we need in Bayesian deep learning for computer vision?,” Advances in Neural Information Processing Systems, 2017. arXiv:1703.04977.

Y. Ovadia, E. Fertig, J. Ren et al., “Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift,” Advances in Neural Information Processing Systems, 2019. arXiv:1906.02530.

E. Hüllermeier, W. Waegeman, “Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods,” Machine Learning, vol. 110, pp. 457–506, 2021. DOI: 10.1007/s10994-021-05946-3.

C. Guo, G. Pleiss, Y. Sun, K. Q. Weinberger, “On calibration of modern neural networks,” in Proc. 34th International Conference on Machine Learning (ICML), 2017, pp. 1321–1330.

J. C. Platt, “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” in Advances in Large Margin Classifiers, A. J. Smola et al., Eds. MIT Press, 1999, pp. 61–74.

B. Zadrozny, C. Elkan, “Transforming classifier scores into accurate multiclass probability estimates,” in Proc. 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002, pp. 694–699. DOI: 10.1145/775047.775151.

A. Niculescu-Mizil, R. Caruana, “Predicting good probabilities with supervised learning,” in Proc. 22nd International Conference on Machine Learning (ICML), 2005, pp. 625–632. DOI: 10.1145/1102351.1102430.

M. Kull, T. M. Silva Filho, P. Flach, “Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers,” in Proc. 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017, pp. 623–631.

J. Vaicenavicius, D. Widmann, C. Andersson et al., “Evaluating model calibration in classification,” in Proc. 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019, pp. 3459–3467.

T. Fawcett, “An introduction to ROC analysis,” Pattern Recognition Letters, vol. 27, no. 8, pp. 861–874, 2006. DOI: 10.1016/j.patrec.2005.10.010.

Published

2026-04-28

How to Cite

Uzun, I., & Lobachev, M. (2026). ONLINE RELIABILITY ESTIMATION OF SOURCES IN STREAMING ANALYSIS OF MULTIMODAL TIME SERIES WITH ISOTONIC REGRESSION CALIBRATION. Scientific Papers of Donetsk National Technical University. Series: “Computer Engineering and Automation”, 4(6(38)), 54–62. https://doi.org/10.32782/2786-9024/v4i6(38).359293

Section

Information Technology