Interesting. For all this talk about fixing “hallucinations” when dealing with LLMs (there’s even an entire conference on it from Pinecone next month), I’m noticing a trend where users actually miss the more “imaginative” assistant, even if it was wrong sometimes.
James R. Hull
@jhull