In my day job, I am currently working on a GPT-powered web application running on Supabase that makes heavy use of the built-in Edge Functions. These Edge Functions run on Deno Deploy, so I learned a lot about Deno and its ecosystem.
More as a demo than out of actual necessity, I tried to mimic ChatGPT’s “typewriter” visual effect as chunks come back from the streaming OpenAI API. I was surprised how easy it was to get something working with Deno’s built-in Web Streams API.
The server part
Deno’s built-in `fetch` and `ReadableStream` implementations make it really easy to get started.
First, create a function to interact with the OpenAI API. The `createChatCompletion()` function takes a prompt and returns a response promise of type `Promise<Response>` from the `fetch` request. Be sure to enable streaming by setting the `stream` property in the request body to `true`. This way OpenAI will also stream chunks of data as they become available.
Also note that this is not an async function that waits for the response. Instead, we will pass the return value directly back to the response of our edge function to handle incoming chunks of data on the client side.
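As a minimal sketch, `createChatCompletion()` could look like this. The `gpt-3.5-turbo` model name and the `OPENAI_API_KEY` environment variable are assumptions; the request body is built in a separate helper so it is easy to inspect:

```typescript
// Type stub so this sketch also compiles outside Deno; in an edge
// function, the global Deno object is already available.
declare const Deno: { env: { get(name: string): string | undefined } }

// Build the request body separately; stream: true tells OpenAI to
// send back chunks of the completion as they become available.
export function chatRequestBody(prompt: string) {
  return {
    model: 'gpt-3.5-turbo', // assumed model, swap in your own
    messages: [{ role: 'user', content: prompt }],
    stream: true,
  }
}

// Deliberately not async/awaited: the pending Promise<Response> is
// passed straight through to the edge function response.
export function createChatCompletion(prompt: string): Promise<Response> {
  return fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${Deno.env.get('OPENAI_API_KEY')}`,
    },
    body: JSON.stringify(chatRequestBody(prompt)),
  })
}
```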
Handle the request
To handle incoming `/chat` requests, use the `serve` handler from Deno’s `http` implementation.
For better readability, this is just an abbreviated outline of the most important part, which is returning the response body. We await the response from the OpenAI API and return its body, which is of type `ReadableStream<Uint8Array> | null`.
While the response itself returns immediately, the readable stream in its body continues to receive chunks of data as they become available. The client can then handle these chunks and update the UI accordingly.
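A sketch of that pass-through could look like this. The completion function (`createChatCompletion` from above) is injected as a parameter here so the handler stays easy to test; the `text/event-stream` content type is an assumption:

```typescript
type CompletionFn = (prompt: string) => Promise<Response>

// Await the upstream OpenAI response and return its body untouched.
// upstream.body is of type ReadableStream<Uint8Array> | null, so the
// client sees chunks as soon as OpenAI emits them.
export async function handleChat(req: Request, complete: CompletionFn): Promise<Response> {
  const { prompt } = await req.json()
  const upstream = await complete(prompt)
  return new Response(upstream.body, {
    headers: { 'Content-Type': 'text/event-stream' },
  })
}

// In the edge function entry point (Deno):
//
//   import { serve } from 'https://deno.land/std/http/server.ts'
//   serve((req) => handleChat(req, createChatCompletion))
```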
The client part
On the client side, we can use the native `fetch` API to retrieve the chat completion response, grab a reader from the response body that we just sent from the server, and use it to process incoming data chunks.
In this example, we use Vue.js to handle the UI and state management, but the same principle applies to any other framework or vanilla JavaScript.
The important steps are:
- Grab the reader from the stream: `response.body.getReader()`
- Decode the incoming data chunks using the `TextDecoder` constructor: `const decoder = new TextDecoder('utf-8')`
- Read the incoming data chunks using `reader.read()`: `const { done, value } = await reader.read()`
- Split the decoded text into separate messages using the blank line between server-sent events as a separator: `.split('\n\n')`
- Parse the messages into JSON using the `JSON.parse()` method: `.map((msg) => JSON.parse(msg))`
- Retrieve the content that needs to be appended by following the OpenAI docs: `msg.choices[0].delta.content`
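Putting the steps above together, the read loop could look roughly like this. The `/chat` endpoint and the `onChunk` callback are assumptions; the parsing step also strips the `data: ` prefix and the final `[DONE]` sentinel that OpenAI sends with its event stream (a production version would additionally buffer partial events that get split across reads):

```typescript
// Turn one decoded chunk of the event stream into an array of content
// deltas. Each server-sent event is prefixed with "data: " and events
// are separated by a blank line; the stream ends with "data: [DONE]".
export function extractDeltas(chunk: string): string[] {
  return chunk
    .split('\n\n')
    .map((part) => part.replace(/^data: /, '').trim())
    .filter((part) => part !== '' && part !== '[DONE]')
    .map((part) => JSON.parse(part))
    .map((msg) => msg.choices[0].delta.content)
    .filter((content): content is string => typeof content === 'string')
}

// Read the streamed response chunk by chunk and hand every delta to
// onChunk -- e.g. a callback that appends to the UI state.
export async function streamCompletion(prompt: string, onChunk: (text: string) => void) {
  const response = await fetch('/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  })
  if (!response.body) throw new Error('Response has no body')

  const reader = response.body.getReader()
  const decoder = new TextDecoder('utf-8')

  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    extractDeltas(decoder.decode(value)).forEach(onChunk)
  }
}
```

In a Vue component, `onChunk` can simply append each delta to a `ref('')` so the template re-renders as the text grows, producing the typewriter effect.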
Conclusion
I was surprised at how easy it was to get something working with Deno’s built-in Web Streams API. I am sure there are many more use cases for this and I look forward to exploring them.