AI Image Generation In Bubble With OpenAI’s DALL-E 3 (Complete Guide)

If you’re looking to create an AI-powered image generation feature inside your Bubble app, then this is the guide for you. In this article, you’ll learn everything you need to know about using OpenAI’s DALL-E 3 with Bubble.

The steps to connect DALL-E 3 with Bubble include:

  1. Generating your own OpenAI API keys
  2. Reviewing the OpenAI API documentation
  3. Creating an API call from Bubble to OpenAI
  4. Designing the UI of your Bubble app
  5. Building the workflows that power the feature

At the top of our list today, the very first thing we’d like to do is walk you through a demo of exactly what we’re going to build. So, if we jump over into a browser, what you’ll see is an application that we’ve already created.

In our example today, we’re going to create something like a small blogging platform. Users will be able to create blog posts, and within each blog post, they can add a title, a body, and a thumbnail image. But what happens if a user doesn’t have a thumbnail image that they’d like to use? That’s where we can leverage generative AI.

Below our picture uploader, you’ll see this text that says,

“Don’t have an image? Generate a custom thumbnail here”. 

When we click on this, it opens up this drop-down menu. This is where our users can add in the prompt for the AI image they would like to generate.

We’ve already taken the time to write a custom prompt out today, so we’re going to paste this in. What we want is a photorealistic image of the 

“Transformer Optimus Prime on a cruise ship, drinking a cocktail, surrounded by fans wanting his autograph and taking pictures”.

We’ll also select that our image should be square, so it’s going to be ‘1024×1024’ pixels. We’ll then choose to generate this. Bubble’s going to send that through to OpenAI. It’s going to work its magic, and as you’ll see, it’s just generated the exact image that we’re looking for.

So, how did we get to this point? We can jump back over to our main browser here. First of all, let’s tick off that we’ve finished showing you a quick demo of our product. Now, from here, this is where the real work begins.

Full Transcript of Tutorial

1. Generating your own OpenAI API keys

When we’re creating an integration with OpenAI, particularly the image generation model ‘DALL-E 3’, what we need to use is the API connector inside Bubble. This allows us to connect these two different platforms. Let’s open up a brand new Bubble editor.

Now, we’ve already taken the time to design the interface of our app, but we’re not interested in showing you that right now. We’ll do that later on. What we need to do right now is actually open up our ‘Plugins’ tab. As you’ll see, we’ve already taken the time to install the plugin known as the ‘API connector’. If you don’t have this, you’ll need to do this now. Just open up your ‘Plugin’ library, and the very first plugin is the API connector, which is a free plugin built by Bubble.

At this point in time, we can now go ahead and create the first API that we want to connect. So, we’re going to select the option to ‘Add another API’. The first thing we’ll need to do is give this a name. We’re going to call this “OpenAI”. The name of this API is the service that we’re going to connect with, the overall platform.

Once we’ve given a name to this API, we now need to create a connection using an API key. If you’re not familiar with working with APIs, please don’t stress. An API is just essentially a way of creating a connection between two different services. 

So, let’s say over here we have Bubble, and over here we have OpenAI. 

We need to create a way for these two services to send and receive data between each other. In order to create that connection, we need to source what’s known as an ‘API key’.

As the name would suggest, a key is like something that opens up a door. It’s going to open up a gateway or a pathway between these two platforms.

2. Reviewing the OpenAI API documentation

So, in order to source your API key, you’re going to need to create an OpenAI account. If we jump back over into our checklist here, you’ll see that we’ve included a link to the OpenAI documentation. What you’ll need to do is click on this, and it’s going to open up a tab and take you through to the documentation page that we’re going to be following today.

But before we follow any of this documentation, what we need to do is source our very own ‘API key’. If you head on over to the left-hand menu, you’ll see there’s an option to view your API Keys. Now, at this point in time, we already have a couple of API keys because we’ve used them for previous tutorials. But if you don’t have one, or if you don’t even have an OpenAI account, please just take the time to register an account and then create your own ‘API key’.

So, we’re going to create a secret key right now. When we select this, we’re just going to need to give this a name. We’re going to call ours the “DALL-E-tutorial Key”, but you can call yours whatever you would like. We’re then going to choose to create this secret key. Once we’ve generated that secret key, we’re going to make a copy of this and jump back over into our Bubble editor. What we’ll now need to do is paste this inside of our API, so that way our Bubble application has permission to talk to our OpenAI account.

That’s what that “key” is for. Now, when it comes to adding in your API key, what you’ll need to do is update the way in which Bubble is going to authenticate with OpenAI. Put simply, everything we’ve just discussed, creating a connection between two services, is known as “Authentication”, because you’re giving something the authority to connect with a service.

Now, for this step, what we need to do is open up our drop-down menu and select the option known as the “Private Key in Header”. How do we know how to do that? If we just jump back over to our OpenAI account and then revert back to that ‘Documentation’ page we’ve shown you, if we scroll down, you’re going to see this little structured piece of code here.

Although this might look incredibly confusing to you, don’t stress because we’re going to explain what every single thing in this means. And look, if you were to strip away all of the code formatting, like all of the asterisks, brackets, or even all the dashes, there’s actually not too much information in here at all. 

One thing we should point out though is that when you’re viewing this little bit of code, please make sure you’re viewing it as the ‘Curl’ option, not the ‘Python’ or the ‘Node.js’ version. We’re going to be using the ‘Curl’ format today, and the reason is that the ‘Curl’ format integrates with Bubble much more easily. In fact, there are ways that you can copy and paste this Curl across into your Bubble app, but today we want to create this connection manually so we can walk you through all of the steps involved. That way, you actually learn how to interpret this API connection.

Now, back to the point that we were making before we got sidetracked, how do we know which authentication version to choose from our dropdown menu? 

So, the option we selected here was the “Private Key in Header”. If we revert to this code, what you’ll see is that there are a couple of different lines inside of this text. At the top here, there are these lines known as the header values (the ‘-H’ flags), and we can see that there’s an ‘Authorization’ inside of the header. Next to that authorization is where it says to add your OpenAI key. So, that’s how we know that the ‘Authorization’ is in fact in the header. And once you select that option, as you’ll see, Bubble’s automatically going to call this the ‘Authorization’ key. All we need to do now is paste in the value of our OpenAI key.

But, as you’ll see inside of this code, we’re not just pasting in the actual key value. We also need to type the word “Bearer” in front of this API key. Now, look, this is pretty standard practice for a lot of APIs, not just OpenAI’s. It’s just a way of signalling that you are the bearer of this API key, the person who owns it. So, what we’re going to do is copy all of this across here. We’re going to copy the word “Bearer” as well as where it says to add your OpenAI key. We’ll then jump back into Bubble and paste that inside of our ‘Key value’ field.

But what we’ll now need to do is just replace the dummy ‘OpenAI API key’ with our own. So, we’re just going to select here and paste our key in. Now, something we should just highlight is that it is very important that you spell “Bearer” the exact same way that OpenAI has laid it out in their documentation. So, it will need a capital “B” and after the word “Bearer”, you will need to add a space. If you don’t include these two things, your connection will not work. But once you’ve pasted in your API key, you now have the authorization to connect with OpenAI, which look, is a pretty big deal. In fact, what we’d like to do is just jump back into our Notion checklist and tick off that we finished installing the API connector, we created our API connection, grabbed our API key, and we’ve also taken a look at the OpenAI documentation. And this is where the fun part begins.
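For reference, this is roughly what that line looks like in the cURL snippet (the key shown here is just a placeholder, not a real key):

```
-H "Authorization: Bearer YOUR_OPENAI_API_KEY"
```

In Bubble, the key name stays as ‘Authorization’, and the key value is everything after the colon: the word “Bearer”, a space, and then your actual API key.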

3. Creating an API call from Bubble to OpenAI

Within this API connection, we need to create what’s known as our very first API call. So, if the overall service that we connected to was OpenAI, the call we’re going to be referencing is the particular service inside of OpenAI that we would like to use. And that, of course, is going to be the ‘DALL-E 3’ service. Next up on our list here is building that out. So, if we jump back into Bubble and scroll down after you’ve taken the time to structure your overall API connection, we can now add all of our services inside of it.

So, we need to give a name to our very first service here. We’re going to ‘expand’ this out, and we’re going to call this “DALL-E 3”. After naming this call, we just need to update what type of API call this is going to be. When it comes to APIs, because you’re connecting with third-party services, you’ll most likely want to receive data from those services or send data to those services.

If you open up this drop-down menu, you’re going to see two options. And this is pretty much exactly what we just mentioned. So, if you’re pulling data from a third-party service, like if you’re creating a stock trading platform and you want to pull real-time stock prices, you would need to select the “Data” option because you’re pulling data in. 

However, if you want to send information to a third-party service like we’re doing today (we’re going to be sending a prompt through to the ‘DALL-E 3’ service), this will need to be an ‘Action’. So, we need to select this option here.

When it comes to the ‘Data type’, we’re going to be leaving this as the ‘JSON’ format. So, OpenAI is going to return the value as some JSON, and inside that response will be the URL of the image it has generated. Then, we can, of course, save that image inside of our database. We will, however, need to update the method we’re going to use to send this API call. Similar to what we mentioned with our “Use as action” setting, we’re not going to be pulling data from a service, we’re going to be posting data to a service. So, we’re going to select this “POST” option.

When we’re posting data, we need to know where to send it. The best way to explain that is with a real-world analogy. If you wanted to send a letter to our home, you would need our address. So, let’s imagine our name is ‘DALL-E 3’, and you want to mail us a prompt. When we receive that letter, we open it, view that prompt, draw you a beautiful picture that looks exactly like your prompt, and then send it back to you.

So, in order for us to receive that prompt, you need to know our address. And thankfully, this one is super straightforward to grab. All you need to do is open up the OpenAI ‘Documentation’, and if you look at the very first line of all of this code here, the URL that they provide is the actual ‘address’. So, we’re just going to highlight this, make a copy of it, jump back into Bubble, and paste that in.
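If you’d like to double-check it, that first line of the cURL snippet looks something like this (at the time of writing, this is OpenAI’s image generation endpoint):

```
curl https://api.openai.com/v1/images/generations \
```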

And look, at this point, we are making great progress, and we’re almost there! If we look at the OpenAI documentation though, we can see that inside of the header values, there’s also something known as the ‘Content-Type’. This is just referring to the format in which we need to send data through to the OpenAI service, and in this case, they want it to be ‘JSON’.

So, we’re going to make a copy of this header and add it to the header of this specific ‘API call’. We’re going to highlight the words “Content-Type”, make a copy of those, and jump back into Bubble. As you’ll see, we now have the option to ‘Add Header’ for this specific API call, similar to how you could add a header to the overall API connection. But what you just need to remember is that if you were to add the header to the overall API connection, any call you add inside of that API is going to have that exact same header setting applied to it.

Now, that might be great for things like our API key, because every single time we connect to OpenAI, we’re going to be using the exact same API key. When it comes to things like the ‘Content-Type’ though, each OpenAI service can require a different ‘Content-Type’. So, for instance, if you’re sending a prompt through to ‘DALL-E 3’, you’re going to be using ‘JSON’. However, if you wanted to create a separate API call and connect to something like the Whisper model, which is their ‘speech-to-text’ model, you’d need to send through an audio file, so the content type is going to be different. That’s why we personally like to set the ‘Content-Type’ in the header of each individual API call.

So, that’s what we’re going to do right now. We’re going to add our header, and for the key, we’re going to paste in ‘Content-Type’ exactly the same way it’s spelled in the documentation, with two capitals and no space (it uses a dash instead). We then need to jump back into our ‘Documentation’ and copy across the value that it wants, which is ‘application/json’. We’ll add that in as our value.

Now, we are going to make sure that this is set as ‘Private’, and this just means that we can’t change it within a workflow later on whenever we call this API. So, this will permanently be this value, and look, we’re completely okay with that because, as we said, this is not going to change.
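Put together, the header we’ve just added corresponds to this line of the cURL snippet:

```
-H "Content-Type: application/json"
```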

Now, one of the very last things we need to build out here is all of the parameters that we’re going to send through with each individual API call. So, what on Earth are ‘Parameters’?

Parameters are essentially just a fancy way of saying that these are the bits of data we’re going to send through to our ‘DALL-E 3’ model. And if you notice inside of our documentation here, we have a few different parameters. There’s the model, there’s the actual prompt that someone’s going to type in, there’s the number of images we should generate, and there’s the size of the image.

That’s four bits of data that we just need to identify when we send our call through to OpenAI.

So, what we’re going to do is copy all of these parameters across. It’s nice and straightforward. We’re just going to copy everything from the opening curly brace to the closing curly brace ({ }), jump back into Bubble, scroll down, and paste it inside of the input field labelled “Body”. And just like that, that is how you can create your very first API call.
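For reference, the block you’re copying into the ‘Body’ field is just the JSON object from the documentation, which looks something like this (the example prompt in your version of the docs may differ):

```json
{
  "model": "dall-e-3",
  "prompt": "a white siamese cat",
  "n": 1,
  "size": "1024x1024"
}
```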

So, at this point in time, if you were to initialize this call, what you’ll see is that you’ll be able to successfully make a connection. If you have created a successful connection, you’ll see this pop-up here, “Returned values – DALL-E 3”, and this just means that Bubble is going to match each of the returned values with a type of data that can be stored inside of your app.

But look, there are a few changes we’re going to make before we save this. So, we’re just going to quickly hit cancel because right now, every single time we create and send an API call through, our parameters currently have static values. And of course, if there’s one thing you probably learned throughout your time building in Bubble, it is the difference between static and dynamic values. 

Static values essentially just mean we’re sending through these exact values every single time the API is referenced. So, every time we connect to ‘DALL-E 3’, it’s going to send through the exact same prompt, which is “a white Siamese cat”. So, it’s going to generate the exact same image, and that image is going to be the exact same size.

Now, for our end users today, that is not the experience we want to create. We want to give our users the ability to type in their own custom prompts as well as select their own custom dimensions for this particular image that they want to generate. So, how can we allow them to do this? 

What you can do when you’re working with APIs is create a dynamic parameter. And as you’ll see above this input field, Bubble actually lays out how you can do this. All you have to do is wrap the value in angle brackets, “<>” (or, we guess you could say, the ‘less than’ and ‘greater than’ symbols), and that will then become a dynamic value.

So, for our prompt here, what we’re going to do is highlight the static text, which is “a white Siamese cat”, replace it with an opening angle bracket (the ‘less than’ symbol), and then assign a name to this dynamic value. We’re going to call this “The Prompt”, and then we’re going to close it off with the ‘greater than’ symbol.

What this now means is that we can replace the word “prompt” here with an actual prompt that a user types inside of the interface of our app, which we’ll show you in a moment. But as you’ll see when we click away here, it’s going to verify that we now have a dynamic parameter. But while you’re testing and building your application, you will just need to add a dummy ‘Value’ into this field here. So, we’re just going to paste in the initial value that OpenAI provided, which was “a white Siamese cat”. 

But by all means, this doesn’t mean that this is going to be static. We’ll be able to replace this text in a workflow in a moment.

We will need to unselect that this should be a ‘Private’ field. If you don’t unselect that this should be ‘Private’, you won’t be able to make changes to this dynamic value inside of workflow actions, so please just take the time to do that right now.

The only other thing we’d like to do is just allow our users to determine what size of images they should generate. So, OpenAI actually provides you with three different dimensions that you can use, and if you go to the documentation page, you’ll see those here. So, you can generate images in ‘1024×1024’, there’s ‘1024×1792’, and then there’s ‘1792×1024’. 

They don’t give you too many options, but look, we just wanted to take the time to explain how you could allow your users to select their dimensions inside of our tutorial today. So, what we’re going to do is also update the static value here to be a dynamic parameter. So, that means we’re going to replace the static value with a ‘less than’ symbol, and we’re going to call this parameter “size”, and then we’ll close it off with the ‘greater than’ symbol. 

And if we click away, as you’ll see, it’s going to generate a space to create a dynamic parameter. And of course, we’ll just need to add a test ‘Value’ into this field. So, once again, we’re going to set that as ‘1024x1024’. And similar to before, we’re also going to uncheck that this should be ‘Private’.
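Once both parameters have been converted, the ‘Body’ field should look something like this. We’ve kept the parameter names used above (“The Prompt” and “size”); if you’ve named yours differently, just make sure the text between the angle brackets matches your parameter names exactly:

```json
{
  "model": "dall-e-3",
  "prompt": "<The Prompt>",
  "n": 1,
  "size": "<size>"
}
```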

And that is everything we need to change here. Now, one thing you might also notice is that we’re not creating a dynamic parameter for every single field here. A great reason why is that when it comes to things like the model, as you can see, we’re referencing the ‘DALL-E 3’ model. That value is always going to be the same. It’s not going to be dynamic, because we don’t want our users to be able to select a different model. We only ever want to reference this particular model, so this value will, in fact, be static. Same with the number of images we’re going to generate. We only want to generate one at a time.

Now, look, after building all of this out, we’re going to ‘Reinitialize call’ here. As you can see, our API call has been successfully initialized. So, we’re going to choose to ‘Save’ this here. Now, something we should just quickly point out is that if you were to get an error message during this process, OpenAI will normally tell you why that error message has occurred. But a common error message might just be that you don’t have enough billing credits on your OpenAI account. 

If you see that message, all you’ll need to do is take the time to add some credit to your account. So, you could add, let’s say, $5 or $10 to it. But if you have a brand new account and you haven’t taken the time to do that, you’re more than likely going to receive that error. So, please don’t stress. All you have to do is just go and purchase some OpenAI credits.
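As a rough guide, OpenAI returns errors as JSON, and a billing-related error tends to look something like this (the exact wording varies):

```json
{
  "error": {
    "message": "You exceeded your current quota, please check your plan and billing details.",
    "type": "insufficient_quota",
    "param": null,
    "code": "insufficient_quota"
  }
}
```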

From here though, what we want to do is just jump back into our Notion checklist, and we’re going to tick off that we finished creating our API call. And if we can be completely honest with you, that concludes the hard part of our tutorial today.

4. Designing the UI of your Bubble app

This is where we get to review the UI design and build out the workflows that are going to power this entire feature. So, this part is going to look incredibly familiar to you if you have been using Bubble for a little bit. What we’re going to do is jump back into our Bubble editor, but we’re going to open up our design tab.

Now, in our ‘Design tab’, this is the application that we previously showed you in the demo at the start of this tutorial. So, as we mentioned before, we just created something like a small blogging platform, and for every single blog that someone shares, they can add a title, a body, as well as a thumbnail image. So, these, of course, are just standard text input fields. And then we’ve got a picture uploader here.

But as you can see below our picture uploader, we also have this text that just prompts someone that if they don’t have an image to upload, they can generate a custom thumbnail. Now, when someone clicks on this text, we have a hidden group that’s going to be displayed below that. And so, this is the group where users can add in values for those dynamic parameters that we just built out within our API call.

So, for the very first field, it’s just going to be a basic text field, or we should say an input field, and this, of course, is where someone can type out their custom prompt. Below this, we have a drop-down menu that displays all of the dimensions for an image that someone can select from.

Now, if you remember inside of our ‘OpenAI documentation’, there were three different dimensions. DALL-E 3 currently doesn’t support any other dimensions at this point, so you’ll just need to copy each of those three dimensions across into your drop-down menu, using a plain lowercase ‘x’ between the numbers, exactly as the API expects. So, you can see here we have all three options pasted in, each on its own line:

  • 1024x1024
  • 1024x1792
  • 1792x1024

5. Building the workflows that power the feature

We’ve kind of just skimmed over how we designed this page because, to be honest, that’s not really the important part. We’re sure you’ve built out your own platform or specific use case, and you’ve already taken care of the design aspect of your build. What we’re interested in showing you, though, is how we can build out the workflow to generate a custom AI image using DALL-E 3.

So, when this “Generate” button is clicked here, we’re going to create a brand new workflow. Now, please ignore all of the existing workflows on this page. These are just used to display things like our hidden group. They’re not really relevant right now. But inside of this workflow, the first thing we’re going to do is head on down to our ‘Plugins’ actions. And what you’ll now see is the option to reference our API call. So, this is where we can connect to our OpenAI API and use the DALL-E 3 API call. And as you’ll see, because we’ve added two dynamic parameters, this is where we can change those values.

So, when it comes to the prompt and the size of this image, we’ve, of course, given our users two input fields on our page where they can add custom values. So, all we need to do is reference those input fields. We’re just going to delete this static text here, and for our ‘Prompt’, we’re going to choose to ‘Insert dynamic data’ and reference our multiline input, or our standard “Input-Prompt” value. That’s just where someone’s going to type in their custom prompt. Then, for the ‘size’ of this image, we’ll ‘Insert dynamic data’ and reference our ‘drop-down image dimensions’ value.

Now, at this point in time in our workflow, Bubble’s going to send this information through to DALL-E 3. DALL-E 3 is going to generate an image and it will send that image back to Bubble. But we need to be able to do something with that image. And so, when it comes to our blogging platform here, look, there are multiple different ways in which you can save that image. But at this point in time, in our specific use case, and look, this might be different to yours, we don’t actually want to save this image in our database right away.

Instead, we’d like to just display a preview of this image inside of our picture uploader. Now, we’re sure that in your own application, you might actually want to save that image directly in your database, and look, we’ll show you how to do that in a moment. However, because our blog post does not yet exist in our database, we can’t attach it to an existing thing or an existing entry. So, what we need to do is just store it in a custom state on our page and then display a preview of it inside of our picture uploader. Now, we apologize if you’re not familiar with custom states, but we’re not here to teach you that today. We have a dedicated tutorial that covers that. But as a quick 101:

 “A custom state is just a way to temporarily store data on your page without having to store it in your database.”

That’s exactly what we want to do today because our blog post doesn’t exist in our database yet. We need to store it on our page temporarily until we actually create that blog post.

So, what you’ll see is that if we double-click on our overall page (our page is called DALL-E) and open up our ‘Element inspector’, you’ll notice that we have an existing ‘Custom state’. This is called “generated-image”, and of course, its type is just an image. So, what we want to do after we generate an image is store that image in our custom state. If we go to our ‘Workflow’ tab, after we’ve generated the image, we’re going to type in the word “state” and select the action to set the state of an element. The ‘Element’ is going to be our overall ‘DALL-E’ page, and the ‘Custom state’ is going to be our ‘generated-image’. Then, from here, we’d like to reference the image that was generated in step one of our workflow. So, we’re going to reference the ‘Result of step 1 (OpenAI - DALL-E 3)’, then its ‘data’, then the ‘first item’ within that data, and we’re going to pull the ‘url’ of the image that was generated.
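To see why that path is ‘data’, then ‘first item’, then ‘url’, it helps to know roughly what a successful response from the image endpoint looks like (trimmed down here, with placeholder values):

```json
{
  "created": 1700000000,
  "data": [
    {
      "revised_prompt": "A photorealistic image of...",
      "url": "https://..."
    }
  ]
}
```

The generated image sits inside the ‘data’ list, so we grab the first item and pull out its ‘url’.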

Now, after setting this custom state, what we’re also then going to do is make sure it can be displayed inside of our picture uploader. So, what you’ll see is that when we double-click on this picture uploader, we’re referencing that custom state and we’re just displaying that image as the ‘dynamic value’ here. So, once there is a picture stored in our custom state, it will be shown as a preview. 

Then, finally, inside of our workflow, there are two additional steps we’d like to add. If this hidden group below the image is being displayed, we’d like to hide it, and then we’d like to reset the input fields. So, if we just jump back to our ‘Workflow’ tab, we’re going to choose to ‘Toggle an element’ that will be our ‘group’, and then we’re just going to choose to ‘Reset the input fields’ of that group. Now, we’ve rushed over those because they’re not super important to our process today, but at this point in time here, we would, in fact, be able to generate our very own AI image.

But before we go ahead and run a preview of that, we just want to quickly jump back to our Notion checklist and tick off that we’ve not only finished reviewing how we’re going to design our app but also how we’re going to build out the workflows that power this feature. The very last thing we want to do is explain how you can now save that image in your database. So, if you jump back into Bubble here, what you’ll see is that we have a separate workflow that’s going to run when the “Publish Post” button is clicked. So, let’s say someone’s taken the time to add in a title and a body, and they have, in fact, generated a custom AI image using DALL-E 3.

When they’re ready to publish this post, we’ve already created a workflow. What you’ll see is that this is going to be pretty straightforward. There’s nothing too fancy about this workflow. All we’re doing is creating a new entry in our database. So, we’re creating a Blog, and inside of this blog, we just have three bits of information. In fact, if we quickly just digress and open up our ‘Data’ tab, we’ll show you what this data type is going to look like.

Under our blog, there are three data fields:

  • The body
  • The thumbnail image, which is an image field
  • The title

So, it’s nothing complex. Over in our workflow, then, what we’re doing is just matching all of the input fields on our page with the relevant data field. But when it comes to our image, we’re just referencing the value of our picture uploader, and of course, that picture uploader is getting its value from our custom state. So, the process of saving an image generated by DALL-E 3 is exactly the same as saving an image from a standard picture uploader.

Then, from here, we just have an additional step which is going to send us through to another page in our app where we can view a list of all of our blog posts. But that is it. It is truly that simple. What we’d love to do now, though, is just run a quick preview of this application so you can once again see how it functions with everything that we’ve just built out.

Over in a preview of our app, we’re just going to create a blog post once again. Now, when it comes to the title of this post, we’re going to call this

“Why Optimus Prime is taking an early retirement (you won’t believe it)”. 

Super scandalous, we know, but look, you’ve got to give the people what they want, and they want drama about Transformers. Then, when it comes to our body text, we’re just going to paste in some dummy text here. 

And then, of course, for our thumbnail image, we don’t actually have an image of “Optimus Prime” to upload. What a shame. If only we could generate an image using DALL-E 3... well, that’s exactly what we’re going to do.

We’re going to select this text, it’s going to open up our dynamic fields, and we’re going to paste in the exact same prompt that we had used in the initial preview of our tutorial. So, it is 

“A photorealistic image of the Transformer Optimus Prime sitting on a cruise ship, drinking a cocktail, surrounded by fans wanting his autograph and taking pictures”. 

For our image size, once again, we’d like this to be a perfect square, so we’re just going to set this as ‘1024x1024’. We’re going to select ‘Generate’, and it’s going to run that workflow and send these dynamic values through to DALL-E 3. It’s then going to generate an image and send it back, and look, it has just generated that image for us. 

Now, from here, we’re going to choose to ‘Publish post’. It’s going to run that workflow, save that data in our database, and then send us through to a page where we can view that particular blog post. And just like that, that is absolutely everything we wanted to cover within this tutorial. 

So, we’re just going to quickly jump back to our Notion checklist and tick off the very last thing, which was learning how we could save an image into our database. And just like that, you now have a fully functional integration with OpenAI’s DALL-E 3 model.

You now know how to integrate DALL-E 3 directly inside your Bubble app to generate AI-powered images. As you can see, the whole process wasn’t too complex. It’s nothing that we couldn’t handle inside of Bubble.
