Sorry for the lack of updates; I have been working on something so awesome it should technically be three blog posts, not one. It was such an intense project that I ended up bricking one of my Raspberry Pis by corrupting its memory card and causing segmentation faults. The whole fiasco is also what slowed down my progress. Anyway, to start off the new year I wanted to shift my focus to upcoming and bleeding-edge technologies like OpenCV. The overall idea is to find the most dominant color in a given frame, so that if something were to remain camouflaged it would have the best chances with the chosen color. To implement this I used K-means clustering to divide the image into two sections and determine which color occupied the most space. The accuracy of this algorithm improves as we increase the value of K (the number of clusters), but so does the computation time, so for the sake of speed I chose to use only 2 clusters. Here is what the algorithm looks like:
Capture video using the RPi camera
Stream the video in a supported format (MJPEG)
Load the video into OpenCV
Process every frame as a NumPy array
Reduce the size of the image for easier computation
Using K-means clustering, create a histogram with K sections
Determine the largest section in the histogram
Render the color on the 8×8 LED grid
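The clustering step above can be sketched roughly as follows. This is a toy, pure-NumPy K-means written just for illustration (the real project would typically reach for `cv2.kmeans` or scikit-learn's `KMeans`), and the synthetic frame is made up for the example.

```python
import numpy as np

def dominant_color(pixels, k=2, iters=10):
    """Return the centroid of the largest k-means cluster of `pixels` (N, 3)."""
    # Farthest-point initialisation keeps this toy sketch deterministic.
    centroids = [pixels[0]]
    for _ in range(1, k):
        dists = np.min([np.linalg.norm(pixels - c, axis=1) for c in centroids], axis=0)
        centroids.append(pixels[dists.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # Assign every pixel to its nearest centroid.
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    # The "histogram" is just the cluster sizes; the biggest bin wins.
    counts = np.bincount(labels, minlength=k)
    return centroids[counts.argmax()]

# A synthetic 100-pixel frame: 90 red pixels and 10 blue ones.
frame = np.vstack([np.tile([255.0, 0.0, 0.0], (90, 1)),
                   np.tile([0.0, 0.0, 255.0], (10, 1))])
dom = dominant_color(frame)
print(dom)  # red wins
```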
The solution architecture is as follows
At first I tried to do everything using only my two Raspberry Pis, but the problems I faced were that OpenCV took 14 hours to compile and the performance was incredibly poor. So I thought it best to delegate the responsibilities to a container in the cloud, which was very easy to set up and configure. There are three main components in the system.
So after installing the MJPEG streaming module on my Pi 2, I wrote a simple wrapper shell script for it.
This would create an MJPEG stream at `http://<rpi-ip>:8080/?action=stream`.
The next step was to consume this stream in AWS. I created a simple base container using the Anaconda framework for Python; setting up OpenCV was as easy as `conda install opencv`. Next is the meat of the project, the code for which is shared below.
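As a hedged sketch of just the per-frame preprocessing steps (capture, shrink, flatten), here a synthetic frame stands in for a real capture so the snippet runs without the stream; in the real pipeline the frame would come from OpenCV's `VideoCapture` pointed at the MJPEG URL.

```python
import numpy as np

# In the real pipeline the frame comes from the MJPEG stream, e.g.:
#   cap = cv2.VideoCapture("http://<rpi-ip>:8080/?action=stream")
#   ok, frame = cap.read()
# Here a synthetic 480x640 frame stands in for a captured one.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[:, :, 2] = 200  # a mostly-red frame (OpenCV uses BGR channel order)

# Reduce the size of the image: keep every 8th pixel in each direction.
small = frame[::8, ::8]

# Flatten to an (N, 3) array of pixels, the shape k-means expects.
pixels = small.reshape(-1, 3).astype(float)
print(pixels.shape)  # (4800, 3)
```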
So this is what the EC2 container sees.
And this is the histogram generated after K Means clustering.
As you can see, red seems to be the most dominant color in the frame. You can tell by the amount of time it takes to compute the dominant color that this project is in its infancy. Let me mention the scope for improvement for this project.
It is fundamentally limiting to use a value of k=2; ideally k should match the exact number of distinct colors in the frame
To deliver the color to the LED board I should use a pub-sub system instead of REST, since acknowledgment of the request is not necessary
In order to achieve true camouflage, computing colour alone is not enough; I need to figure out patterns and textures
The overall performance of the system must improve, either by using a distributed-computing approach like MPI or by tweaking the algorithm
Hope you guys liked my project. Look forward to more bleeding edge projects in the year ahead
We have all heard of IaaS, SaaS and PaaS offerings. But I recently came across AWS Lambda, which I would like to define as RaaS (Runtime as a Service). What AWS Lambda provides is an execution environment for running NodeJS and Python code in a completely serverless/stateless manner, which means all we have to do is write code that conforms to the specs of a Lambda function and let AWS handle the scaling and execution of the code. The good news is that it integrates well with many AWS services, and you get billed only for the compute time you use. You can trigger a Lambda function in response to many events, such as an S3 upload or a change in a CodeCommit repository. Since I wanted to play around with this technology, I thought of building a slash command that suggests gifs for a given term. The solution architecture for this application is as follows.
So the sequence of events is as follows.
When we type /gifsuggest "something", Slack makes a POST request to our app, providing us with lots of information such as the user, team, channel, etc. It also provides a response_url, whose purpose I will explain later.
Using the nginx config on my VPS, I redirect the request to a containerized Express app.
Slack's user experience guidelines enforce a rule that a response must be made within 3000 ms, or the command is considered a failure. This is why, at this point, I immediately send some placeholder text back to Slack.
Next, we POST the search term and the response_url to an API Gateway endpoint.
The API Gateway is what triggers the execution of our Lambda function.
The Lambda function GETs gifs using the Giphy API.
Finally, using the response_url from earlier, we POST the gifs and create a Slack message. Slack allows us to use the same response_url to create up to 5 messages within 30 minutes.
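The two replies in the sequence above can be sketched like this. The real project implements them in an Express app; these helper functions are hypothetical stand-ins written in Python, though the message fields themselves (response_type, attachments, image_url) are standard Slack message fields.

```python
import json

def immediate_reply():
    """Placeholder returned within Slack's 3000 ms deadline."""
    return {"response_type": "ephemeral", "text": "Looking for gifs..."}

def delayed_reply(gif_urls):
    """Payload POSTed later to the response_url Slack provided."""
    return {
        "response_type": "in_channel",
        "attachments": [{"image_url": url} for url in gif_urls],
    }

payload = delayed_reply(["https://example.com/a.gif"])
print(json.dumps(payload))
```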
Below is the slash command in action.
Now let’s get into the code. There are many moving parts in this application, so I will show each microservice in the order in which they execute. The first is the Express app that quickly replies to Slack and then starts the Lambda function. I have called this service slack-lb.
This app runs in an Alpine Linux Docker container with a Node.js environment.
The next microservice is the Lambda function itself.
Event: This contains the data that is being passed into the invocation.
Context: This contains runtime information about the Lambda function, such as the execution time remaining, along with other lifecycle hooks.
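To illustrate these two arguments, here is a minimal handler sketch in Python (Lambda runs Python as well as NodeJS). The echo body is purely illustrative; the actual gif-search logic is omitted.

```python
# A minimal sketch of a Lambda handler, not the project's real code.
def handler(event, context):
    # `event` carries the invocation data, e.g. the search term we POSTed.
    term = event.get("text", "")
    # `context` exposes runtime metadata; for example,
    # context.get_remaining_time_in_millis() reports the time left.
    return {"statusCode": 200, "body": "searching for " + term}

print(handler({"text": "cats"}, None))  # {'statusCode': 200, 'body': 'searching for cats'}
```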
Amazon is nice enough to provide a NodeJS library that registers the AWS lambda context as an Express middleware. This makes migrating existing express apps to Lambda very easy. We just need to write a handler as follows.
and then register middleware in our express app.
To deploy the lambda function all we need to do is create a zip file with the node modules, the handler and the express app files and then upload them on AWS.
Benefits of Lambda:
No need to manage any infrastructure.
Easy monitoring using CloudWatch.
Issues with Lambda:
Currently only supports NodeJS 4.3
Only supports stateless applications
No support for sending files
Scope for improving this project:
Right now, once the gifs are sent, the user still has to copy the URL and paste it at the destination; there should be a way to forward the gifs using message buttons.
The app is using a public API key for Giphy, which is rate limited and not meant for production use. I must get a production key.
Migrate the proxy slack-lb app into its own lambda function.
PS: A lot of people make this mistake, but GIF stands for Graphics Interchange Format, which would mean that it is "gif" and not "jiff".
P.P.S: Like everything I do, this is open source. Feel free to contribute.
I recently received a very fortunate gift: a B1248 LED badge. The badge came with support software that ran only on Windows and worked fairly well. However, given my love for engineering, I began to look around for ways to program it and gain complete control over it. I stumbled upon a fantastic library, which worked almost completely out of the box on my Raspberry Pi 3. However, merely implementing something someone else has developed is more of an operations task. Being on the development side of things, I thought of ways to improve it and came up with this.
I built a simple Flask app around it and gave it a REST interface. Sample code for it can be found here. I am always open to pull requests and public contributions. However, building a REST API wasn’t enough for me, so I went ahead and ‘containerized’ the app, meaning that we ironically have to use ‘-v’ during ‘docker run’ to mount the badge’s port into the container. This REST API can be used to transmit very useful and critical information, such as the example given below.
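The sample code lives in the linked repo; as a rough sketch of the request-handling core, the helper below validates a payload and shapes it for the badge. Every name here is hypothetical, and the real Flask route and badge library API will differ.

```python
# Illustrative only: the real app wraps this logic in a Flask route and
# hands the result to the badge library.
def make_badge_message(payload):
    """Validate a JSON payload and shape it for the badge."""
    text = payload.get("text", "")
    if not text:
        return {"error": "text is required"}, 400
    # The badge shows a single marquee line, so flatten any newlines.
    return {"text": text.replace("\n", " "), "speed": payload.get("speed", 5)}, 200

body, status = make_badge_message({"text": "API latency: 42ms"})
print(status, body["text"])  # 200 API latency: 42ms
```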
The original idea was to monitor all my VPSes and check for downtime. However, the library I use doesn’t support multi-line text, which makes a marquee impractical for showing lots of text. It would also be really nice if this could show the current response time for all my APIs.
Scope for improvement:
Figure out multi-line text.
Separate the 2 processes into their own microservices.
Implement a queueing mechanism such as Kafka or RabbitMQ to read from MySQL.
Further extend the API to show either weather information or trending #tags.
Running Docker on the Raspberry Pi is a fun process. Before I begin, I should thank the Hypriot blog for making it possible and providing a great set of tutorials and guides. I own two Raspberry Pis (a 2 and a 3), both running Debian 8.0 (jessie).
The instructions required to install Docker on the RPi are pretty straightforward.
Do note that you need to be root to run Docker commands, so I suggest switching to root first. The first thing I recommend doing is installing the Docker-UI container built specifically for the Raspberry Pi.
Do note that since the Raspberry Pi is an ARM device, the number of supported containers is limited: if anywhere down a container’s dependency chain we come across a library that doesn’t support the ARM architecture, the container cannot build or run. I find this to be the major pain point of running Docker on the Raspberry Pi. To install the Docker-UI RPi container, we do the following.
```shell
# Pull the latest image of the container.
docker pull hypriot/rpi-dockerui:latest

# Run the container in detached mode (-d), expose port 9000 on the host
# as 9000 in the container (-p host:container), and mount (-v) the local
# Docker engine socket onto the container's socket.
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock hypriot/rpi-dockerui
```
After which if you go to
http://`raspberry pi IP`:9000/
you should be greeted with Docker-UI. I will explain how to set up nginx-based reverse proxying in a different post. Here is what’s running on my Raspberry Pis.
I am experimenting with separating Front-End and Back-End components onto different hosts. Currently my RPi 2 holds Front-End components and the RPi 3 holds Back-End components.
Welcome to my tech blog, where I share stories of my experiences and lessons learned during my software development work.
Let’s get started with me introducing my DEV infrastructure, that is, where I run all my applications in their beta phase.
The environment consists of
Digital Ocean Droplet:
My Digital Ocean droplet is running Ubuntu 14.04 and acts as the provisioning server that binds all the other servers into an OpenVPN subnet. The reason I have chosen to use a Digital Ocean droplet can be explained easily with this screenshot. All new nodes in my environment must connect to this VPN in order to be accessible from other nodes.
EC2 t2.micro Instance: There is an EC2 instance that acts as my API gateway, used to access API resources within any environment. This server runs NGINX and reverse-proxies incoming requests using a service-based port/URL mapping. It runs Amazon Linux, which is a custom distribution.
Raspberry Pi 2 / Raspberry Pi 3: I am the proud owner of two Raspberry Pis, which are the core of my dev infrastructure. They are both running the Raspbian 8.0 (jessie) operating system and are mainly used as Docker container hosts to deploy applications on.
Overall Architecture diagram:
This is what the overall architecture of the system looks like, with only one public access point in the entire system: my OpenVPN provisioning server. All other components are not publicly accessible. This architecture also allows for load balancing between both Raspberry Pis using my NGINX config.
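That load-balancing piece of the NGINX config might look roughly like this; the VPN addresses, ports, and path below are illustrative placeholders, not the actual values.

```nginx
# Illustrative sketch only: addresses and ports are placeholders.
upstream rpi_pool {
    server 10.8.0.2:8080;   # Raspberry Pi 2 (VPN address)
    server 10.8.0.3:8080;   # Raspberry Pi 3 (VPN address)
}

server {
    listen 80;
    location /api/ {
        # Requests are balanced across both Pis (round-robin by default).
        proxy_pass http://rpi_pool;
    }
}
```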