AWS Lex Chatbots
An Overview of Building a Chatbot on AWS and the Tools Required

What is Amazon Lex? Lex is a framework built by Amazon that helps you build, configure and deploy chat-based applications very quickly.
You can think of Lex as a custom REST API application running on a shared Amazon server, one that exposes a simple yet secure set of instructions for interacting with a client application or any other AWS service. Using Amazon’s definition, Lex is a “Service for building conversational interfaces into any application using voice and text”.
AWS has a very specific definition of “service” inside their own ecosystem, which is essentially any process that will consume server resources on AWS. EC2, Lambda, DynamoDB, and even S3 are all, at the end of the day, processes that run on AWS shared servers with specific APIs that allow them to be linked together.
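To make that concrete, here is a minimal sketch of what talking to the Lex service looks like from code, using the boto3 Lex V1 runtime client. The bot name, alias, and user ID are placeholders you would swap for your own published bot.

```python
import boto3

# The Lex runtime is just another AWS service endpoint with its own API.
# Bot name, alias, and user ID below are placeholders for your own bot.
lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="BookHotel",      # the bot you built in the Lex console
    botAlias="prod",          # the alias you published it under
    userId="demo-user-123",   # any stable ID that identifies this conversation
    inputText="I'd like to book a hotel in Chicago",
)

# Lex answers with its best guess at the intent, the slots it has filled so far,
# and the next message it wants shown to the user.
print(response.get("intentName"), response.get("dialogState"), response.get("message"))
```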

The image above is one of my favorites, describing the most basic architecture you can use to set up Lex. There are SO MANY more advanced architectures possible on AWS, none of which I will get into here, but you should start from the basics if you are just starting with your first chatbot project.
Building with Lex
Building a chatbot on AWS is simultaneously incredibly easy and incredibly difficult.
The easy part: Configuring the chatbot API logic inside of the simulated environment that Amazon provides you.
The hard part: Connecting your Lex chatbot with other AWS services and with a user-facing website, and learning about AWS infrastructure.
Disclaimer: I did not have a lot of background in AWS when I started this project, so I had to spend extra time and effort to really understand AWS infrastructure concepts.
That being said, I do think that, objectively, configuring AWS services to work together is more difficult than building a small demo of any one of those services on its own.
Building a Chatbot in the chatbot builder UI
You can quite easily build the set of inputs and responses that your chatbot will handle. The bot framework provided by Amazon revolves around what are called ‘intents’: user goals that the chatbot detects as it receives input from the user.
If the chatbot senses that the user wants to, say, place an order for takeout through your bot, then it will begin the process of collecting data about that order — the items for delivery, customer information, address, etc.

In the image above you can see the ‘Intent’ is at the top of the page, “BookHotel”. Below that are the questions that the bot will be triggered to ask the user, if it senses that the user wants to book a hotel. “Great, glad that you want to book a hotel” it will say. “What city will you be staying in?”, and so on and so forth.
If the user switches direction and begins asking the bot for information about the company itself, the bot can recognize the change of intent and switch to an information-providing task.
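The console UI is the easiest way to build this, but it can help to see the same structure as data. Below is a rough sketch of what a BookHotel-style intent might look like if you defined it through the Lex V1 model-building API with boto3 instead of the console; the utterances, slot, and prompt are illustrative, not taken from the actual blueprint.

```python
import boto3

lex_models = boto3.client("lex-models", region_name="us-east-1")

# Illustrative only: a minimal BookHotel-style intent with one required slot.
# A real bot would add more slots (check-in date, nights, room type) and publishing steps.
lex_models.put_intent(
    name="BookHotel",
    sampleUtterances=[
        "Book a hotel",
        "I want to make a hotel reservation",
        "Book a {City} hotel",
    ],
    slots=[
        {
            "name": "City",
            "slotType": "AMAZON.US_CITY",
            "slotConstraint": "Required",
            "priority": 1,
            "valueElicitationPrompt": {
                "messages": [
                    {
                        "contentType": "PlainText",
                        "content": "What city will you be staying in?",
                    }
                ],
                "maxAttempts": 2,
            },
        }
    ],
    # Hand the collected slots straight back to the client rather than calling a Lambda.
    fulfillmentActivity={"type": "ReturnIntent"},
)
```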
You can learn more about how the bot builder works with this introductory guide here, and then follow up with the more detailed documentation written by the folks at Amazon. If your brain works anything like mine, you will be able to pick this piece up fairly quickly (in a matter of a few days of dedicated learning).
Connecting Lex to other AWS services to build your Application
Here is where things get a bit trickier. The strength of AWS is in its flexibility, modularity, and security, but that makes the infrastructure around a project like this a bit more complicated to set up. Let’s take another look at our trusty architecture diagram.

Based on the above, we’ll need, at the very least, an EC2-hosted application to talk to the Lex chatbot. That seems simple-ish, but what about the Lambda function? It seems to have a role to play in handling the chatbot’s information, but what is that role exactly? Where does Cognito fit in? Why do we need Cognito in this picture anyway? To understand a bit better, let’s dive into the application request flow.
1. The user makes a request to the web app server.
2. The server makes a request to Cognito.
3. Cognito checks the authentication status of the user and returns a credentials object with the allowed access of that user (see the sketch after this list). If you have not allowed unauthenticated access, the Cognito service will try to redirect the user to a login page to obtain that authentication. Note: as far as I can tell, there is no way to spoof this credentials object for the sake of testing.
4. The user inputs text or audio into your application and presses ‘send’.
5. Your application calls the Lex API (API URLs can be found here) with the input text/audio content, and an accompanying OPTIONS request that confirms the access rights of the user based on the credentials received when your app was initialized.
6. The Lex bot takes the text input and performs the logic that you built into the bot in the bot builder UI.
7. (optional) The Lex service can call a Lambda function as part of each call to the Lex API. In this model, Lex handles the output of the Lambda function and relays it back as the response, instead of building its own response object to send to the client.
8. Steps 4–7 repeat until the intent is fulfilled.
9. Finally, once an intent is complete, the Lex chatbot can perform a special action on the collected data, called ‘fulfillment’ in the documentation.
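To make steps 2 through 5 a little more concrete, here is a minimal sketch of the credential exchange for an identity pool that allows unauthenticated access, again using boto3. The identity pool ID is a placeholder, and the unauthenticated role is assumed to allow lex:PostText; a browser app would do the same thing with the JavaScript SDK instead.

```python
import boto3

# Placeholder identity pool ID; yours comes from the Cognito console.
IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Steps 2-3: ask Cognito for an identity and temporary, scoped-down credentials.
identity = cognito.get_id(IdentityPoolId=IDENTITY_POOL_ID)
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])

# Steps 4-5: call the Lex runtime using those credentials
# (the same post_text call shown earlier in this post).
lex = boto3.client(
    "lex-runtime",
    region_name="us-east-1",
    aws_access_key_id=creds["Credentials"]["AccessKeyId"],
    aws_secret_access_key=creds["Credentials"]["SecretKey"],
    aws_session_token=creds["Credentials"]["SessionToken"],
)

reply = lex.post_text(
    botName="BookHotel",
    botAlias="prod",
    userId=identity["IdentityId"],  # reuse the Cognito identity as the Lex user ID
    inputText="I want to book a hotel",
)
print(reply.get("message"))
```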
So you’ll need to configure a Cognito service to work with your application, as well as the front-end application and the Lex chatbot. Then, if you want to process the data collected by the chatbot, you will need to create a Lambda function to handle that processing (a minimal sketch follows below). If you want to then analyze that data, you can do so with CloudWatch, which integrates with the Lex chatbot itself and requires pretty minimal configuration to get running.
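For reference, here is a minimal sketch of what that Lambda handler might look like for the Lex V1 event format; the closing message and the “process the data” step are placeholders for whatever your bot actually needs to do at fulfillment.

```python
# Minimal Lex V1 fulfillment handler (Python Lambda).
# The processing step and closing message are illustrative placeholders.

def lambda_handler(event, context):
    intent_name = event["currentIntent"]["name"]
    slots = event["currentIntent"]["slots"]

    # This is where you would actually process the collected data:
    # write it to DynamoDB, call an internal API, send a confirmation email, etc.
    print(f"Fulfilling intent {intent_name} with slots {slots}")

    # Tell Lex the intent is fulfilled and hand it a closing message to relay to the user.
    return {
        "sessionAttributes": event.get("sessionAttributes") or {},
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Thanks! Your request has been received.",
            },
        },
    }
```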
Thanks for reading!