LLMs
Large Language Models like GPT-4 are the whole reason LangStream exists: we want to build applications on top of LLMs. After learning the Stream Basics, it should be clear how you can wrap any LLM in a Stream; you just need to produce an AsyncGenerator out of its output. However, LangStream already comes with some LLM streams out of the box to make this easier.
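To illustrate the idea, here is a minimal, self-contained sketch of that pattern. The `fake_llm` function below is a hypothetical stand-in for a real LLM client that streams tokens; the point is only that any source of tokens exposed as an AsyncGenerator can be consumed the same way a Stream would consume it:

```python
import asyncio
from typing import AsyncGenerator

# Hypothetical stand-in for a real LLM client: streams tokens one by one.
async def fake_llm(prompt: str) -> AsyncGenerator[str, None]:
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0)  # simulate waiting on the network
        yield token

# Consuming the AsyncGenerator: a LangStream Stream wrapping an LLM would
# iterate over exactly this kind of generator under the hood.
async def collect(prompt: str) -> str:
    return "".join([token async for token in fake_llm(prompt)])

result = asyncio.run(collect("greet"))
print(result)  # Hello, world!
```

With a real model you would replace `fake_llm` with a call to the provider's streaming API, yielding each chunk as it arrives.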
Like other things that are not part of the core of the library, they live under langstream.contrib. Read on for the OpenAI examples.