So a few years ago I began experimenting with building skills on the Amazon Alexa platform. I found the developer experience to be top-notch and the provided SDKs easy to use. I created two skills, named Phill and Joe. From my understanding, developing a skill on the Alexa platform consists of three basic components.
Intents are the VUI (voice user interface) equivalent of a software interface. They list the features and functionality you wish to accomplish with your Alexa Skill.
The above are the intents I assigned to the skill Joe: perform sentiment analysis, say a greeting, send a text message, validate a two-factor authentication code, and perform a secure action.
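In the classic Alexa skill-builder format, an intent schema for a skill like this might look like the JSON below. The intent and slot names here are illustrative guesses, not the exact ones from Joe; `AMAZON.FOUR_DIGIT_NUMBER` and `AMAZON.LITERAL` are built-in slot types from that era.

```json
{
  "intents": [
    { "intent": "SentimentIntent", "slots": [{ "name": "Phrase", "type": "AMAZON.LITERAL" }] },
    { "intent": "GreetingIntent" },
    { "intent": "SendTextIntent" },
    { "intent": "ValidateCodeIntent", "slots": [{ "name": "Code", "type": "AMAZON.FOUR_DIGIT_NUMBER" }] },
    { "intent": "SecureActionIntent" }
  ]
}
```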
Utterances are like the implementations of those intents. They are essentially the product of applying context to intents so that they are easier to understand and implement. Think of them as test cases for human interaction with our skill.
The above are utterances, which essentially train Alexa to understand which intent to pick up when receiving a particular type of input.
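Sample utterances in the old one-line-per-utterance format map a phrase to an intent name. These examples are illustrative, not the skill's actual utterances:

```
GreetingIntent say hello
SentimentIntent how does {Phrase} sound
SendTextIntent send a text to my phone
ValidateCodeIntent my code is {Code}
SecureActionIntent unlock the door
```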
The Actual Skill itself:
I made use of different client libraries for the various intents I wanted to accomplish, such as Twilio for sending text messages, speakeasy for two-factor authentication, and a simple sentiment analyzer. Lambda allows for configuring environment variables, so all my configuration lived in a separate env file that could be uploaded directly to AWS.
Scope for improvement:
Try not to write a blog post about something you built 3 years ago. 😛
Setting up CI/CD to simplify the development process
Continue to have fun with whatever you are planning to achieve.
Hey everyone, I know it’s been a while; I have been busy working on a bunch of exciting stuff to keep you entertained. This post is about a fun approach to solving a common problem we face. Wouldn’t it be nice if you could summon everyone at the press of a button? Well, now you can, using an AWS IoT Button. The original ones were given away for free at an AWS re:Invent event some time ago. The IoT button is basically a Dash button that is unlocked, programmable, and three times more expensive.
Leaving all the business reasons aside, the IoT button comes with a non-removable battery and is good for only about 2,000 clicks. Amazon definitely has everything figured out: it took me just five minutes to get up and running. I unboxed the button, downloaded the app on my phone, connected it to the Wi-Fi, logged into my AWS account, and the next thing you know I could see all my Lambda functions and API Gateway endpoints that could be assigned to it. The ideal use case for me was to invite everyone in the household to a given meal. It is normally a hassle to coordinate with everyone and try to work things out; this project simplifies it down to a single click. This is what the solution architecture looks like.
It’s actually really cool: when the button is clicked, a small LED indicator on the IoT button blinks white and keeps blinking for a while. It eventually turns either green (success) or red (failure) depending upon the outcome of the Lambda execution. The result is a text message sent from my Twilio number, which looks as follows.
The size of this project is quite small, but it has proven to be very useful. That being said, there is always scope for improvement.
Add scheduling to determine who is available on weekdays/weekends for which meal
Incorporate the ability to handle a response from the user to confirm/decline their availability
In case text message delivery fails, have an alternate notification mechanism such as email
Hope you guys enjoyed this post. Expect more exciting posts to ECHO in the future. 😉
We have all heard of IaaS, SaaS and PaaS offerings. But I recently came across AWS Lambda, which I would like to define as RaaS (Runtime as a Service). What AWS Lambda provides is an execution environment for running NodeJS and Python code in a completely serverless/stateless manner, which means all we have to do is write code that conforms to the specs of a Lambda function and let AWS handle the scaling and execution of the code. The good news is that it integrates well with many AWS services, and you get billed only for the compute time you use. You can trigger a Lambda function in response to many events, such as an S3 upload or a change in a CodeCommit repository. Since I wanted to play around with this technology, I thought of building a slash command that suggests GIFs for a given term. The solution architecture for this application is as follows.
So the sequence of events is as follows.
When we type /gifsuggest "something", Slack makes a POST request to our app, providing us with lots of information such as the user, team, channel, etc. They also provide a response_url, whose purpose I will explain later.
Using the nginx config on my VPS, I redirect the request to a containerized Express app.
Slack’s user experience guidelines enforce a rule that a response must be made within 3000 ms, or else the command is considered a failure. This is why, at this point, I just send some placeholder text to Slack immediately.
Next, we POST the search term and the response_url to an API Gateway endpoint.
The API Gateway is what triggers the execution of our lambda function.
The Lambda function GETs GIFs using the Giphy API.
Finally, using the response_url from earlier, we POST the GIFs to create a Slack message. Slack allows us to use the same response_url to create up to 5 messages within half an hour.
Below is the slash command in action.
Now let’s get into the code. There are many moving parts in this application, so I will show each microservice in the order in which they execute. The first is the Express app that quickly replies to Slack and then kicks off the Lambda function. I have called this service slack-lb.
This app runs in an Alpine Linux Docker container with a NodeJS environment.
The next microservice is the Lambda function itself.
Event: This contains the data that is being passed into the invocation.
Context: This contains the lifecycle information of the Lambda function, such as the execution time remaining, and other lifecycle hooks.
Amazon is nice enough to provide a NodeJS library that registers the AWS Lambda context as an Express middleware. This makes migrating existing Express apps to Lambda very easy. We just need to write a handler as follows.
and then register middleware in our express app.
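The post doesn't name the library (it was likely AWS Labs' aws-serverless-express), so here is a stripped-down, hand-rolled version of the idea: the middleware just exposes the Lambda event and context on each request. The real library does considerably more (it also proxies the API Gateway request into the Express server).

```javascript
// Returns an Express-style middleware that attaches the current Lambda
// invocation's event and context to the request object.
function eventContext(event, context) {
  return function (req, res, next) {
    req.apiGateway = { event: event, context: context };
    next();
  };
}

// Registration in the Express app would then look like:
// app.use(eventContext(lambdaEvent, lambdaContext));
// Downstream routes can read req.apiGateway.event / req.apiGateway.context.
```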
To deploy the Lambda function, all we need to do is create a zip file with the node modules, the handler, and the Express app files, then upload it to AWS.
Benefits of Lambda:
No need to manage any infrastructure.
Easy monitoring using cloudwatch.
Issues with Lambda:
Currently only supports NodeJS 4.3
Only supports stateless applications
No support for sending files
Scope for improving this project:
Right now, once the GIFs are sent, the user still has to copy the URL and paste it at the destination; there should be a way to forward the GIFs using message buttons.
The app is using Giphy’s public API key, which is rate-limited and not meant for production workloads. I must get a production key.
Migrate the proxy slack-lb app into its own lambda function.
PS: A lot of people make this mistake, but GIF stands for Graphics Interchange Format, which would suggest it is pronounced “gif” and not “jif.”
P.P.S: Like everything I do, this is open source. Feel free to contribute.