Dynamic Trust Score Explanation and Adjustment in Zero Trust Architecture Using Large Language Models

Authors

  • Digvijay Parmar Independent Researcher, USA

Keywords:

Dynamic trust, Zero Trust, Large Language Models, Signal aggregation, Adaptive enforcement, Explainable AI

Abstract

Dynamic trust scoring in Zero Trust Architecture (ZTA) enables continuous risk assessment using constant inputs from diverse security sources. Multiple data sources are combined to compute a continuously updated score reflecting the level of trust in a given access request or transaction. This study uses explainable Large Language Models (LLMs) to generate comprehensible explanations of why trust levels change. The model leverages a retrieval-augmented generation (RAG) pipeline to consolidate diverse security signals, enrich them with contextual data, and produce human-readable justifications for trust score updates. The system maps these explanations to corresponding ZTA policies, allowing it to trigger security measures such as two-factor authentication prompts, denial of access requests, and device isolation. Practical applications demonstrate that the approach successfully handles suspicious login attempts and identifies misuse of critical assets. Adding LLM-generated explanations to ZTA has been shown to improve the timeliness and accuracy of security decisions and makes the system better prepared for emerging cyber risks and threats.
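The pipeline the abstract describes (aggregate risk signals, compute a trust score, map the score onto ZTA enforcement tiers, and attach an explanation) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the signal names, weights, and thresholds are hypothetical, and the `explain` function stands in for the LLM call that the paper's RAG pipeline would make.

```python
# Hypothetical risk signals and weights (illustrative values only).
SIGNAL_WEIGHTS = {
    "impossible_travel": 0.4,  # login from a geographically implausible location
    "failed_mfa": 0.3,         # repeated failed second-factor attempts
    "new_device": 0.2,         # previously unseen device fingerprint
    "off_hours": 0.1,          # access outside normal working hours
}

def trust_score(signals):
    """Start from full trust (1.0) and subtract weighted risk penalties."""
    penalty = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)
    return max(0.0, 1.0 - penalty)

def policy_action(score):
    """Map the continuous score onto ZTA enforcement tiers
    (thresholds are assumptions, not taken from the paper)."""
    if score >= 0.8:
        return "allow"
    if score >= 0.5:
        return "step_up_mfa"    # prompt for two-factor authentication
    if score >= 0.3:
        return "deny"           # deny the access request
    return "isolate_device"     # quarantine the endpoint

def explain(signals, score, action):
    """Stand-in for the LLM step: in the paper's design, the signals plus
    retrieved context would be passed to an LLM to produce the narrative."""
    return (f"Trust lowered to {score:.2f} due to: {', '.join(signals)}. "
            f"Policy response: {action}.")

if __name__ == "__main__":
    signals = ["impossible_travel", "new_device"]
    score = trust_score(signals)
    action = policy_action(score)
    print(explain(signals, score, action))
```

The key design point the abstract emphasizes is the coupling in the last step: the explanation is generated alongside, and tied to, the concrete policy action, so an analyst sees both what was enforced and why.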


Published

27-04-2025

How to Cite

[1]
“Dynamic Trust Score Explanation and Adjustment in Zero Trust Architecture Using Large Language Models”, J. of Art. Int. Research, vol. 5, no. 1, pp. 1–30, Apr. 2025, Accessed: Mar. 07, 2026. [Online]. Available: https://thesciencebrigade.org/JAIR/article/view/611