How To Add Text Generation To A Bubble App With OpenAI (Completions API)

Are you looking to add an AI-powered text generation feature to your Bubble app?

In this guide, you’ll learn how to integrate OpenAI’s text generation model with Bubble. Using the Completions API, you’ll be able to give your app the same AI superpowers that are used in ChatGPT.

The steps to integrate the Completions API with Bubble include:

  1. Generating your own OpenAI API keys
  2. Reviewing the OpenAI API documentation
  3. Creating an API call from Bubble to OpenAI
  4. Designing the UI of your Bubble app
  5. Building the workflows that power the feature

In our example today, we’ve created an app that generates love poems based on a user’s personal relationship.

Let’s run a preview of our app, then input some information. We can say our partner’s name is Lucy, and our anniversary is September 14th, 2023. 

When it comes to our favorite qualities about Lucy, we’re just going to paste in some options that we’ve already created. We’re going to say Lucy is caring, she has a great sense of humor, she’s compassionate, and she has a plump bubble butt. Look, we’re going to keep it cheeky.

When we then send this information to our API, it’s going to generate a limerick poem for us. And look, let me tell you, this is straight fire. Let’s just give a little taste of what it’s created there:

“Once was a girl named Lucy,

Whose love for me was so juicy.

Her caring ways, her humorous plays,

Made our heart feel oh so floozy”.

Now, look, we just have to say, if you’re not a believer in AI, surely this is enough to convert you over to the dark side. So, as you can see, we’ve created this application here today. But look, regardless of whatever type of app or scenario you’re building for, what we’re going to teach you can be applied across any different use case. So, let’s get started by jumping back into our main checklist here after showing you a quick demo of our product.

Full Transcript of Tutorial

1. Generating your own OpenAI API keys

Alright, now we can build out the connection between Bubble and OpenAI, and if we’re being honest, this is where most of the hard work is. Throughout our tutorial today, particularly if you are relatively new to Bubble or you’ve never built out an API connection, this can seem quite overwhelming, and trust us, we completely understand.

When we used to think about connecting APIs, the first thing that would pop into our mind was code, and of course, writing code is the enemy of any Bubble developer. But we’re here to assure you that it doesn’t have to be. Thankfully, the process of integrating the OpenAI API is super straightforward and simple. In fact, in our opinion, you don’t even need to know how to code to be able to do it because it is truly that easy. And look, it’s our job to make it as simple as possible for you.

So, the first thing we’re going to do is jump into Bubble. Now, we’ve already pre-built this page here, and we’ll show you how that’s built out in a moment. But what we’re interested in doing right now is jumping over to our ‘Plugins’ tab and installing the plugin that allows us to create an API connection.

Now, an API, we should say if you’re not familiar with the term, is just a way of connecting two different services together. So, if you’ve got your Bubble app over here and OpenAI on the other side, an API would connect these two platforms and allow you to send and receive data between them, which is exactly what we want to do. We want to send a prompt from our Bubble app through to OpenAI. We want it to generate some text, and then send that text back through to our Bubble app.

So, in order to do that, we need to create that API connection. Let’s add a plugin here, and the plugin we’re going to add is the ‘API Connector’, which is the first plugin on our list. And of course, this is a free plugin built by Bubble. We’re going to choose to install that. We can then close this library, and we’re now going to add our first API.

So, your API here is the name of the service that you want to connect to, and in this case, it’s going to be ‘OpenAI’. So, this is the overall service, not the specific API call. For instance, the API call today is going to be referencing their text generator, but that’s going to sit inside of this overall service.

Now, when you’re creating your first API, what you need to do is create some sort of way to authenticate that. And what do we mean? When it comes to connecting services with APIs, you need to generate what’s known as an ‘API key’, which, as the name would suggest, is literally like having a key that opens a door between our two platforms. So, let’s say there’s a door in the middle. We’ve got a key. We’re going to open that up, and it will allow you to communicate between both of your services. It’s essentially just a way of Bubble verifying your OpenAI account and your OpenAI account verifying your Bubble account. And so, how can we do this? 

2. Reviewing the OpenAI API documentation

Back inside our checklist here, you’ll see that we’ve included a link to the ‘OpenAI documentation’. So, if we click on this link, it’s going to take us through to a page that highlights everything we need to connect today. And you don’t need to worry about any of this here; just create your own OpenAI account if you have not already. If you have, what you can then do is head over to the left-hand menu and select ‘API Keys’.

Now, as you’ll see, we’ve already created a bunch of different API keys for some previous tutorials that we built out. But if you don’t have an API key, what you’ll need to do is create a brand new secret key of your own, and you can call this whatever you want. Today, we’re just going to call ours “Text-Generator-Tutorial”. And then we’re going to choose to create this.

You’ll then need to make a copy of your API key and jump back into Bubble, and when it comes to this ‘Authentication’ here, what you’ll need to do is open up this drop-down menu and select the “Private Key in Header” option. Now, how do we know that we should select this? That’s a great question, and we’re glad you asked. If we were to jump back into our OpenAI account and then just quickly revert back to the ‘Documentation’ page that we’d mentioned before, this is where we can start to look at the code that’s going to power our ‘API call’.

Now, inside this little code snippet, we can see a bunch of different information, and look, please don’t stress if you don’t know how to read this. We’re going to explain it to you in plain English. But there is one key bit of information that we can see at the top here, next to this little “H”, which is known as a header. You can see that there’s a field here known as the ‘Authorization’. So, that is how we know that the authorization is in the header. And look, most API keys follow this exact same practice, so it was pretty straightforward to work out how we needed to authenticate.

So, if we jump back into Bubble, we’re going to select that the private key is in the header, and as you’ll now see, Bubble’s automatically going to populate this key name with the word “Authorization”, which is the same value that we have right here. Perfect. All we now need to do is copy across our API key, and as you can see, OpenAI has added some dummy text where it recommends you need to add in your own API key. So, what you actually need to do is highlight the word “Bearer” as well as this dummy text. So, the word “Bearer” essentially just means that you are the person who bears this API key; you are the owner of it. So, we’re going to copy this across, jump into Bubble, and we’re going to paste this into the ‘Key Value’ field here. Now, you will need to paste this exactly how it’s spelled in the OpenAI documentation.

So, the word “Bearer” should have a capital “B”, and there should be a space after it before you then add in your own API key. Now, once again, it is pretty standard for API keys to require you to add in the word “Bearer” and a space, so this isn’t anything new or revolutionary; it’s just a pretty common practice.

What we now need to do, though, is highlight from the dollar symbol onwards, and that’s where you need to paste in your own API key. So, we’re going to paste in our OpenAI key there, and just like that, we’ve now authenticated a connection between Bubble and our very own OpenAI account.
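If it helps to see it spelled out, here’s a minimal sketch of the value that ends up in the ‘Key Value’ field: the word “Bearer”, a single space, then your key. The `YOUR_API_KEY` string below is a placeholder, not a real key:

```python
# The 'Key Value' field is the word "Bearer", one space, then your secret
# key. "YOUR_API_KEY" is a placeholder -- use your own key from OpenAI.
api_key = "YOUR_API_KEY"

headers = {"Authorization": "Bearer " + api_key}

print(headers["Authorization"])  # → Bearer YOUR_API_KEY
```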

If we jump back over into our checklist, what we can actually do is tick off that we finished installing the API, we created our first API service, we then generated our OpenAI key, and we are now reviewing the documentation.

The next thing we need to do on our list is to create the very first ‘API call’. So, “the call”, as the name would suggest, is going to send a message from Bubble through to our OpenAI account, and it’s just going to send with it some information which is going to be like a request. So, it’s essentially picking up the phone and calling OpenAI and saying, 

“Hey, here’s a prompt; can you please turn this into more text?”

3. Creating an API call from Bubble to OpenAI

So, if we jump back into Bubble here and scroll down inside of our overall API, you’ll see the option to add in your first API call. So, if we click expand here, you can customize the name of this call to whatever you would like. We’re going to call this “Text Generator”.

When it comes to this call, we need to make a few minor tweaks to all of the settings here. So, the way we’re going to use this API call is actually as an ‘Action’, not ‘Data’. And what on Earth is the difference between these two options? When you’re working with APIs, you’re most likely going to want to pull information from a service or send information to a service. 

So, a great example of the first scenario is if you’re building something like a stock trading platform; you might want to use a third-party service that provides real-time values of stock prices. And so, in that case, you’re pulling data into your application. So, that’s when you would use the ‘Data’ option. However, today, we’re going to be sending data through to OpenAI; we’re going to be sending through a prompt, and we’re also going to be telling it which OpenAI service we’d like to use. So, in this case, it’s a text generator. So, that’s going to be an ‘Action’, and that will allow us to reference this within a workflow.

When it comes to the ‘Data type’, we’re just going to leave this as the standard ‘JSON’ option, which is essentially just a fancy way of saying that we can format some of the text we’re going to send through. And then, for the actual way we’re going to use this API, it’s going to be a “POST” because, as we mentioned before, similar to an action, we’re going to be posting information somewhere. But where on Earth are we going to be posting that information?

So, if the user types in a prompt, we need to be able to send that to the OpenAI service. But where on Earth does that service live on the Internet? It’s kind of just like if you were to send a letter to OpenAI’s head office; you would need their address. And thankfully, that address has been provided inside of the ‘API documentation’.

So, at the top of the API documentation, you can see the URL for the completions API, and that’s the text generator that we’re going to use today. So, we’re going to copy that URL, jump back into Bubble, and paste this into this field. And that is all you’ll need to change. The only other thing we’ll need to do here, though, is just add an additional header. The reason for that is that if we jump back over to our ‘Documentation’, remember how we mentioned there was a header for the authorization? We can see there’s another header here for the ‘Content-type’. So, this just confirms what type of information we’re sending through to OpenAI, and in this case, it’s just going to be “application/JSON”.

So, what we’re going to do is actually copy across the word “Content-type”, we’ll jump back into Bubble, we’re going to add a header, and we’re going to paste this in as the key. Then, for the value, we’re going to copy across the “application/JSON” text, copy that, and paste this into the ‘Value’ field.
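For reference, here’s roughly what the call we’ve just configured in the API Connector looks like if you assemble it by hand. This is only a sketch to make the moving parts visible; the URL, method, and headers come from the OpenAI documentation, the key is a placeholder, and we build the request without actually sending it:

```python
import json
import urllib.request

# Endpoint from the OpenAI documentation (the completions API).
url = "https://api.openai.com/v1/chat/completions"

headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
    "Content-Type": "application/json",      # the extra header we just added
}

body = json.dumps({
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
})

# Build the POST request; actually sending it requires a valid key,
# so we stop here.
request = urllib.request.Request(url, data=body.encode("utf-8"),
                                 headers=headers, method="POST")

print(request.get_method())  # → POST
```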

Now, we’ve previously seen scenarios where some people will add the header value in the actual overall API key, but as you can see here, you would be adding that as ‘Add a shared header’. So, what does that mean? Inside our overall OpenAI API, you’re not just limited to adding one particular service. So, we’re using the completions API, which is the text generator. But what happens if we also want to use ‘DALL-E’ for image generation or ‘Whisper’ for speech-to-text transcription? If you want to add more than one service or more than one API call, you have the ability to do that by adding additional calls.

And if you were to set the ‘Content-type’ as a shared header, that means that every single service inside of your overall API will share that exact same value. And look, you might not want that to be the case. If you just have one API call inside of your overall API, that’s fine. But when you’ve got multiple different services that send data in multiple different formats, you’re going to run into an issue.

A great example is if you were to use the ‘Whisper API’. When you’re sending audio, the request body isn’t ‘JSON’; it’s a file upload. So that’s why we personally just like to add the content type as an individual header for each individual API call. Something else we should point out is that when it comes to this header, we’re going to make sure that this is still ticked as ‘Private’, which just means that we won’t be able to change it within our workflows, and that’s completely fine with us. We will always want it to be this exact value here.

And that is the very last thing we need to build out for our setup, and from here, we can scroll on down to the fun part, in our opinion. So, if we jump over to the ‘Documentation’, this is where we can copy and paste across all of the additional values within this API call. This is all the juicy stuff that’s going to power this API call, and these are known as your ‘Parameters’.

A parameter is just a fancy way of saying it’s a piece of information that you want to send through. It’s kind of like how you use URL parameters inside Bubble. What you’ll find is you have your ‘Parameter key’, and then you have a value for that. So, we can see there are three parameters. First of all, there is the model. In our tutorial today, we’re just going to be using ‘GPT-3.5 turbo’. The reason we’re using this is because it’s the cheapest model, so you’ll be able to follow along without spending much at all. Of course, if you wanted to use the ‘GPT-4’ model, you could just replace 3.5 with 4.

But then, we can also see that there’s a parameter for the ‘messages’, and this, of course, is where we’re going to be able to type in our prompt. And then, there is the temperature, which controls how random or creative the model’s output is.

So, what we need to do is select from the open bracket here and highlight all the way down to the closed bracket. We’re going to then make a copy of this. We’ll jump back into Bubble, and we’re going to paste this into our ‘Parameters’ field or ‘Body’. So, this is the actual JSON of the API request.
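If you’re curious what that pasted text actually is, it’s just a small JSON object carrying the three parameters. A quick way to sanity-check your own edits to it is to run the body through a JSON parser; the values below are the sample ones from the documentation:

```python
import json

# The three parameters from the documentation's sample body.
body = {
    "model": "gpt-3.5-turbo",                                          # which model to use
    "messages": [{"role": "user", "content": "Say this is a test!"}],  # the prompt
    "temperature": 0.7,                                                # randomness of the output
}

# Round-tripping through json confirms the body is valid JSON.
as_text = json.dumps(body, indent=2)
print(as_text)
```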

And at this point in time, after pasting that in, you would have a successful API call. The only problem is that when you initiate this call, it’s going to send through this exact information here. Now, why is that a problem right now? This is all ‘Static’ text, and if there’s one thing you probably learned from Bubble, it’s the difference between ‘Static’ and ‘Dynamic’ data. This static text is going to send through the exact same information for every single API call. So, regardless of whatever prompt a user types in, it’s always just going to send this exact prompt here, which just asks the model to say that this is a test.

What we want to do today is, of course, create a way for our users to be able to type in their own custom prompts. And so, how can we do that? If you look above the input field here, Bubble allows you to create ‘Dynamic’ parameters or ‘Dynamic’ values, we should say, and the way you do this is by adding the less than (<) and the greater than (>) symbol and your dynamic value in between.

So, let’s say for the prompt that we want to send through, we want it to be ‘Dynamic’. So, we want to constantly change it based on the prompt that a user types in. So, in our love letter app, that would be the information about their partner’s name, their anniversary date, and the features that they like about their partner. And so, if we needed to create a dynamic value for our prompt, that is known as the ‘Content parameter’.

Now, how do we know that? Inside of our Notion checklist, we’ve also added a link here which just outlines what every single parameter means in OpenAI’s separate documentation. So, if you scroll down, you’ll see parameters like ‘The model’; there’s also the ‘User parameter’, and there’s also things like ‘The temperature’, which, as it describes, is a way for you to determine how creative or predictable you want the model’s output to be.

Now, something we should also quickly point out for those people wanting to use ‘GPT 4’, not ‘3.5 turbo’, is that over on the right-hand side here, you can see another example of an API request, and right now, you’ll notice that this is linked to the ‘GPT 3.5 turbo’ model. But what you can do is open this dropdown menu and select whatever additional model you would like. So, if you want to use ‘GPT 4’, this is how you should format the value for the parameter of the model, and of course, you can just copy and paste that across and replace it with any of the existing values that we’ve added inside of our ‘JSON’ text here.

So, we’d really recommend you take the time to read that ‘Documentation’ page as it is incredibly helpful to make sense of all of this information here inside of what looks like a little bit of a confusing code snippet. But we apologize because we’ve digressed back to our main goal here. We just want to make sure that this prompt can be ‘Dynamic’. So, we’re going to highlight all of the text inside of our quotation marks. We’re going to add a less-than symbol (<), and we’re going to type in a name for our dynamic value, and we’re just going to call this “The Prompt”. We’ll then add a greater-than (>) symbol, and what you’ll see is when we click away, it’s now created a dynamic value. So, it’s not only highlighted in green, but the key name for this is “Prompt”. And when it comes to the ‘Value’, we’ll just need to add in a test value. So, we can have something like “Tell me a joke”.
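Under the hood, a dynamic value is simple placeholder substitution: before the call is sent, the text between < and > is swapped for whatever value the workflow supplies. A rough sketch of the idea, with a body mirroring the one we pasted:

```python
import json

# The JSON body with Bubble's dynamic placeholder in angle brackets.
template = ('{"model": "gpt-3.5-turbo", '
            '"messages": [{"role": "user", "content": "<prompt>"}], '
            '"temperature": 0.7}')

# At call time, the workflow's value replaces the placeholder.
body = template.replace("<prompt>", "Tell me a joke")

print(json.loads(body)["messages"][0]["content"])  # → Tell me a joke
```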

Now, for this dynamic parameter, you will need to unselect that this should be ‘Private’. And what that’s going to allow you to do is make changes to this in the workflow that we’re about to create that sends this through to OpenAI. For our tutorial today, this is the only dynamic parameter we’re going to add in. But of course, you can make dynamic parameters for things like ‘The temperature’, ‘The role’, or even ‘The model’. But we want all of those values to be the exact same. It’s only the prompt that we want to change. You do even have the option of updating the temperature manually, so you could set it to, say, 0.2 just so that the output is more focused and predictable. We’re just going to leave ours at 0.7, and we’re quite happy with that. Now, after building out all of this JSON here, we’re going to initialize our ‘API call’. If your API call is successful, you’re going to see this pop-up display, and this is just going to map out all of the data that you’re going to store when OpenAI sends text back to you.

We’re quite happy with all of the default settings here, so we’re going to choose to save this. One thing we should just point out, though, is that if you see an error message telling you to add credits to your OpenAI account, what you need to do is just go into OpenAI, add in your billing details, and purchase some OpenAI credits or tokens. When we’re recording our tutorials, we just add $10 there; it allows us to play around and tinker as much as we want, and we find that $10 is more than enough to do that.

Now, look, just like that, that is absolutely everything we need to cover when it comes to building out our actual API. So, let’s jump back into our Notion checklist, and we can tick off that we finished building out the API call itself, which is going to power our whole experience today. And from here, this is where the fun part begins.

4. Designing the UI of your Bubble app

We can now review the way we’ve set up our app and then build out the workflow that’s going to power this entire experience. So, let’s jump back into our Bubble editor. In this case, we’re going to open up our ‘Design’ tab, and this is where we can break down how we’ve built out the world’s most revolutionary product known as ‘Lover AI’.

Now, obviously, your use case is probably going to be different from ours today. Although we haven’t actually put a patent on ‘Lover AI’, we’re going to assume that you’re probably not going to build an app like this yourself. You might be building something like a blogging platform, and you want to be able to generate text for people. But we think it’s worth just highlighting how we’ve built out our experience.

So, obviously, on our page here, we have three different input fields. There are two text fields, and these are just standard input fields where you can add your partner’s name and your favorite qualities about your partner. We then also have a date-time picker, which of course just allows someone to select their anniversary day. There’s nothing too special, but what we’re going to do is aggregate all of that information together and send through a custom prompt to our model, and we’re going to do this whenever the ‘Write Letter’ button is clicked. So, let’s choose to add a workflow. Within this workflow, it’s actually pretty straightforward. All we need to do is select an action from our ‘Plugins’ menu. As you can see, we can now reference our ‘OpenAI text generator’, which is referencing the completions API. Now, from here, this is where we can type in our custom prompt.

In this case, we would tell OpenAI that it is, let’s say, a romantic novel writer, and we wanted to create a poem about a particular person whose anniversary date is on a specific date and who possesses certain qualities. One thing we find is that when you’re typing lots of text into this field, it’s a little hard to see everything you’ve written. So, just a personal tip of ours is to ‘Insert Dynamic data’ and just type in “arbitrary text”. What that’s going to do is just allow you to have more space here to type in more text, and all of that value will essentially just be normal text. But we personally just like to build our prompt out using this way.

5. Building the Workflows that power the feature

So, in most use cases, what you’ll find is that when you’re sending a prompt through to OpenAI, you might just want to paste across the user’s text they’ve added themselves. But today, what we want to do is explain to you how you can create a custom prompt. For instance, if you want to give GPT a particular persona that it needs to follow or give it any additional information, it’s going to give you a much better end result than if you were to just straight-up send across the prompt the user typed. Particularly in our example today, our information is actually stored across three different input fields. It’s not just one input field, so we need to mash all of that together.

What we’ve done is we’ve created a bit of a custom prompt, and we’re just going to paste the start of this in. So it starts by saying, “You are an expert writer who specializes in writing quirky romantic limerick poems. Write a love letter in the form of a limerick poem for…” and then from here, we want to add the name of the person’s partner. So, we’re going to ‘Insert dynamic data’ and reference our ‘Input-partner-name’, the value there. If someone typed in the word “Lucy”, it would display the word “Lucy”. We’re then going to click away, add a space, and then from here, we’re going to paste in the second part of our prompt. It’s going to say, “Write a limerick poem for their name about how much they mean to you ever since you met them on…” and we’re now going to add in their anniversary date. So, we’re going to ‘Insert dynamic data’ and reference our ‘date-time picker/ anniversary’, and its value. We’ll then click back in, hit space, and actually, we’re going to delete that space because our next part of the prompt has a comma. So,

“Ever since you met them on the date they’ve meant the world to you. Your favorite qualities about them are…” Then we’re going to reference our ‘Input qualities’ and their value. And that is the very last bit of our prompt. So, we’re going to add a full stop for a nice measure. This right here is our own custom prompt that’s just going to create a much better end result. So, you can add whatever you want into your own example today.
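To make the stitching concrete, here’s the same three-part prompt assembled in plain Python, with sample values standing in for the three Bubble inputs:

```python
# Sample values standing in for the three input elements on the page.
partner_name = "Lucy"
anniversary = "September 14th, 2023"
qualities = "caring, a great sense of humor, compassionate"

# The custom prompt, stitched together the same way the workflow does it.
prompt = (
    "You are an expert writer who specializes in writing quirky romantic "
    "limerick poems. Write a love letter in the form of a limerick poem "
    f"for {partner_name}. Ever since you met them on {anniversary}, "
    f"they've meant the world to you. Your favorite qualities about them "
    f"are {qualities}."
)

print(prompt)
```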

At this point in time, this step in our workflow would run, and it would send that data through to the completions API service. One thing we really just need to point out is the way in which this text is going to be formatted. If you remember inside of our documentation for this particular API call, it was storing this data as ‘JSON’. So ‘JSON’ looks exactly like this, but ‘JSON’ is very particular, or pedantic, we should say, about the way in which characters like quotation marks and line breaks need to be escaped.

Typically, what people would do is select the “More” option after their prompt, type in the word “JSON”, and choose to format this as ‘JSON-safe’. What that’s going to ensure is that it escapes any problematic characters and makes sure the text is in a JSON-readable format. But the thing to note is that it also adds quotation marks at the front and the end of our prompt. So, again, over in our documentation here, you can see that the prompt needs to be in quotation marks, and by formatting this as ‘JSON’, it’s going to automatically add those for you.

But where the problem arises is that if we look back over in our ‘Plugins’ tab, we’ve already added quotation marks around this dynamic value. So, if we format it as ‘JSON-safe’, it’s going to essentially add another round of quotation marks, which will then throw up an error message for you. Now, look, it is best practice to add this “formatted as JSON-safe”, but what we’ve found is that if we add this in, we personally get an error message because OpenAI can’t read the request; it’s got too many quotation marks. So, we’re personally just going to remove this today. But if you find you don’t get the error message when you run that, that is completely fine. For us, though, we’re just going to leave our prompt as it is.
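To see why the doubled quotation marks break the call, here’s a small demonstration. `json.dumps` already wraps a string in quotes and escapes the inner ones (roughly what the ‘JSON-safe’ formatting does), so wrapping its output in a second pair of quotes, like the ones already sitting in our body, produces invalid JSON:

```python
import json

prompt = 'She said "hi"'

escaped = json.dumps(prompt)  # adds surrounding quotes, escapes inner ones

body_ok = '{"content": %s}' % escaped     # no extra quotes in the template
body_bad = '{"content": "%s"}' % escaped  # template quotes + escaped quotes

json.loads(body_ok)  # parses fine

try:
    json.loads(body_bad)
    bad_parses = True
except json.JSONDecodeError:
    bad_parses = False

print(bad_parses)  # → False
```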

After a prompt has been sent and OpenAI has sent back some text, what we need to do is obviously display that on our page. The way we’ve done this, back in our ‘Design’ tab, is by adding a text element, which is going to be hidden by default. So, this just says

“Your love letter”. 

And then what we’ve done is we’ve just created a custom state on our page, which is going to store some text in it. Today, we’re not here to teach you about custom states. If you’re not familiar with them, we do have a dedicated tutorial that explains it. But, look, the quick TLDR version is that they’re just a way of temporarily storing data on your page, not in your database. Because every single time we request a love letter, we personally don’t want to create an entry in our database. But we will show you how we can do that in a moment.

What we’ve just done on our page is we’ve just opened up our ‘Element inspector’ and created the custom state called “GPT response”. This is just a text value. It couldn’t get simpler than that. Over in our ‘Workflow’ tab, after we receive the text back, we’re going to set the ‘State’ of an element. That element will, of course, be our overall page, and the custom state will be the “GPT response”. For the ‘Value’ of this state, it needs to be text, so we’re going to reference ‘The result of Step One’, which is where we sent and received data from OpenAI. We’ll then need to reference the “Choices” field, and look, this is going to come back as a list of text. So, what we’re going to do is reference ‘each item’s message content’, and then we’re just going to select ‘The first item’ because in our use case today, there’s only going to be one particular item, and that is the one poem that it’s going to write.
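For reference, here’s an abridged sample of the JSON shape the completions API sends back, and how the Bubble expression, ‘Choices’, ‘each item’s message content’, ‘first item’, maps onto it. The response text below is a made-up sample:

```python
# Abridged sample of the response JSON (the real response carries more
# fields, such as usage and finish_reason).
response = {
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Once was a girl named Lucy..."}}
    ]
}

# "each item's message content" -> one text value per choice ...
contents = [choice["message"]["content"] for choice in response["choices"]]

# ... and "first item" picks the single poem we asked for.
poem = contents[0]

print(poem)  # → Once was a girl named Lucy...
```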

Just like that, that is exactly how you can build out the workflow to not only generate but then also store data in a custom state. Look, if you wanted to store this in your database, if you wanted to create a new thing, all you’d need to do is head to your ‘Data’ tab and pretty much replicate the exact same step that we just showed you. So, you create a thing, let’s say you wanted to create a blog post, and you wanted to store this as the ‘Body’ text. All you’d need to do is, once again, just follow the exact same steps that we’ve shown you. So, you’d reference ‘The result of Step One’, ‘The choices’, ‘Each item’s message content’, and ‘The first item’ there. That is exactly how you could store that text in your database.

But look, we’re going to delete that because we don’t need to store it in our database today. Instead, we’re happy to just keep it in our custom state. Let’s run a preview of our model though and see how this is going to look and feel.

In a preview of our app, what we’re going to do is just type in the partner’s name. We’re going to say it’s Lucy. Once again, we’re going to select our anniversary. It was pretty recent, let’s say it was December 14th. And we’ve changed our minds about our favorite qualities about Lucy. In fact, this time we like her smokey brown eyes, her buck teeth, and her great sense of humor. So, what we’re going to do is choose to write a letter here. It’s going to send that request through to OpenAI. It’s going to run that completions model through that prompt, and it’s now going to return to us yet another straight-fire poem. Let’s scroll down and have a look.

“Oh, Lucy, with eyes like smoke in the night, and teeth that just make our heart take flight. Your humor is so grand, we’re forever your fans. Since we met, you’ve been our delight”.

And look, ladies and gentlemen, if you ever spend a Valentine’s Day alone again, you truly have no one else to blame but yourself, because today we’ve taught you how to create possibly the best secret weapon to the whole dating game in the world, and that is our greatest creation, ‘Lover AI’.

From here though, what we want to do is just jump back into our Notion checklist and tick off that we finished showing you how we designed this application and of course, how we built out the workflows to make the entire thing functional. And just like that, you now know how to integrate OpenAI’s text generation model directly within your own Bubble app. As you can see, the whole process wasn’t too complex; it wasn’t anything that we couldn’t handle inside Bubble.
