Within the Unified Inbox, you will see a View Conversation Logs option inside the Chat Insights panel on the right side of the screen.
When you click this option, a panel opens displaying detailed insights into the information and sources the AI used to generate its response.
Within this panel, you will see a list of all questions asked during that specific conversation. When you click on an individual question, the following sections will appear:
Conversation Flow Instructions
Knowledge Hub Extracted Response
History
Response
Insights
Conversation Flow Instructions
Under Conversation Flow Instructions, you will see the specific instructions pulled from the assigned conversation flow.
This section is helpful for confirming that the AI is referencing the correct configuration and for troubleshooting scenarios where the instructions were retrieved but the response does not appear to fully align with those rules.
Knowledge Hub Extracted Response
When the AI responds to a question, it retrieves all relevant information available in the Knowledge Hub related to the guest’s inquiry and associated topics.
Under Knowledge Hub Extracted Response, you can view the exact content the AI pulled from the Knowledge Hub to formulate its answer.
For example, if the AI provides an incorrect or unexpected answer, you can review the Knowledge Hub Extracted Response section to see exactly what information it pulled in order to understand how the response was generated.
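Conceptually, this retrieval step works like any document search: the guest's question is matched against Knowledge Hub content and the best-scoring entries are extracted. The following is a minimal illustrative sketch only; the `retrieve` function, the keyword-overlap scoring, and the `faqs` data are assumptions for demonstration, not Myma.ai's actual search implementation (which may use semantic or vector search).

```python
# Minimal keyword-overlap retrieval sketch (illustrative only).
def retrieve(question, documents, top_k=2):
    """Return the top_k documents sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = []
    for doc in documents:
        overlap = len(q_words & set(doc.lower().split()))
        if overlap:
            scored.append((overlap, doc))
    # Highest-overlap documents first; ties keep their original order.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored[:top_k]]

faqs = [  # hypothetical Knowledge Hub entries
    "Check-in time is from 3 pm and check-out is by 11 am.",
    "Free parking is available for all hotel guests.",
    "The pool is open daily from 7 am to 9 pm.",
]

extracted = retrieve("What time is check-in?", faqs)
```

In this sketch, `extracted` would contain the check-in FAQ first, which is the kind of content you would then see listed under Knowledge Hub Extracted Response.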
The Myma.ai team is currently enhancing this section to provide clearer source attribution, so you will be able to identify precisely which area of the Knowledge Hub — such as External Sources, FAQs, or Documents — the response originated from.
History
When the AI generates a response, it also considers the context of the ongoing conversation.
Under History, you can view the previous messages exchanged in that chat, along with the AI’s corresponding responses, allowing you to understand how context influenced the answer.
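Carrying context this way is how most LLM-backed assistants work: each new question is sent to the model together with the earlier turns and the flow instructions. A hedged sketch of what that assembled context might look like; the message structure below is modelled on common chat-API conventions and is an assumption, not the product's internal format.

```python
# Assemble conversation history plus the new question into a single
# prompt payload, the way chat-style LLM APIs typically expect it.
# (Illustrative structure only -- not Myma.ai's internal format.)
history = [
    {"role": "guest", "content": "Do you have parking?"},
    {"role": "assistant", "content": "Yes, free parking is available on site."},
]

def build_payload(history, new_question, instructions):
    # Instructions come first, then prior turns, then the new question,
    # so the model can resolve references like "it" from earlier messages.
    messages = [{"role": "system", "content": instructions}]
    messages.extend(history)
    messages.append({"role": "guest", "content": new_question})
    return messages

payload = build_payload(history, "Is it covered?", "Answer using hotel FAQs only.")
```

Without the history, a follow-up like "Is it covered?" would be unanswerable; with it, the model can tell that "it" refers to parking, which is exactly the influence the History section lets you verify.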
Response
Under Response, you will see the AI’s internal analysis of the conversation, including how it formulated its answer and how it classified the overall status of the interaction.
Insights
Lastly, at the bottom of this panel, you will see three icons representing key performance metrics related to how the response was generated.
Tokens Used
This reflects the total number of tokens processed during the interaction. Tokens are the small units of text (words or parts of words) that the AI reads and generates. This metric includes both the input (guest message, conversation history, and instructions) and the output (the AI’s response). Monitoring token usage helps evaluate response complexity and overall processing load.
Document Search Time
This indicates how long it took the system to search the Knowledge Hub and retrieve relevant content related to the guest’s question. It reflects the time spent scanning FAQs, external sources, and documents before the AI generated its response. Longer times may indicate broader or more complex searches.
LLM Response Time
This measures how long the Large Language Model (LLM) took to generate the final response after receiving the retrieved information and instructions. This metric reflects the AI’s processing time and can help identify latency related to response generation rather than knowledge retrieval.
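Taken together, the three metrics describe one request: token volume plus two latency components, one for retrieval and one for generation. A small sketch of how you might read them from a log entry and derive the total response time; the field names here are hypothetical, not the product's actual log schema.

```python
# Hypothetical conversation-log entry; field names are assumptions,
# not the actual Myma.ai log schema.
log_entry = {
    "tokens_used": 412,          # input + output tokens
    "document_search_ms": 180,   # Knowledge Hub retrieval time
    "llm_response_ms": 950,      # LLM generation time
}

def total_latency_ms(entry):
    # End-to-end generation time = retrieval time + LLM generation time.
    return entry["document_search_ms"] + entry["llm_response_ms"]

latency = total_latency_ms(log_entry)  # 1130 ms for this example
```

Splitting latency this way shows which stage to investigate: a high Document Search Time points at retrieval, while a high LLM Response Time points at generation.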