I ran a Quillit query twice using the same question on the same transcript, with no other changes, and the output varied in both length and level of detail. Is that normal? Why does that happen?

Yes, variation in length and detail is normal with generative AI models such as Quillit. Generative AI models are statistical models that introduce an element of randomness into their output, so it is not possible to guarantee identical results when you re-run the same query on the same transcript, video, or audio file. Quillit accommodates this by allowing users to run the same query on the same content up to three times. Users can then choose the version they consider best, or combine versions, when using the output in a report.
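To illustrate the general mechanism behind this behavior (not Quillit's actual implementation), here is a minimal sketch of how generative models pick each next word by sampling from a probability distribution. The vocabulary and scores below are hypothetical; the point is that two runs over the exact same input can legitimately produce different choices.

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=None):
    """Pick one candidate word from model scores.

    Higher temperature flattens the distribution (more randomness);
    a near-zero temperature makes the choice effectively deterministic.
    """
    rng = rng or random.Random()
    words = list(scores.keys())
    scaled = [scores[w] / temperature for w in words]
    # Softmax: convert scores into probabilities that sum to 1.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(words, weights=probs, k=1)[0]

# Hypothetical scores for candidate next words.
scores = {"concise": 2.1, "detailed": 1.9, "brief": 1.5}

# Two runs over the same input may select different words,
# which compounds over a full response into differing length and detail.
run_a = [sample_next_token(scores) for _ in range(5)]
run_b = [sample_next_token(scores) for _ in range(5)]
```

Because every word is sampled this way, small early differences compound across an entire answer, which is why two runs can diverge noticeably in length and level of detail.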