AWS Pipeline Plugin for Jenkins 2.x

I found this really cool plugin last week, so I thought I'd write a post about it.

I heavily use the “AssumeRole” capability within my application. Previously, this is what I used:

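Roughly, it was a shell script along these lines (a minimal sketch; the role ARN and session name are placeholders):

```bash
#!/usr/bin/env bash
# Minimal sketch: assume a role via the AWS CLI and export the temporary
# credentials. The role ARN and session name below are placeholders.
CREDS=$(aws sts assume-role \
  --role-arn "arn:aws:iam::123456789012:role/deploy-role" \
  --role-session-name "jenkins-deploy" \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)

export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

# From here on, AWS CLI commands run under the assumed role.
aws s3 ls
```
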
As you can see, the code above uses shell commands to achieve the assume-role awesomeness. This works well when your Jenkins job has a shell command step, but not when you have a Groovy pipeline defined in your Jenkins.

I struggled with it initially. One idea I had was to upload the assume-role script somewhere like S3, then pull it down and run it whenever I wanted to run commands under the assumed role. However, this felt a bit cumbersome.

The next thing on my mind was to write a Groovy plugin that could do this for me. However, rather than reinventing the wheel, I started looking for existing solutions. Finally, I found the aws-pipeline-plugin.

It's a neat little plugin that lets you do a bunch of basic things you might want to do on AWS. Assuming a role is one of them.

So now, with the new plugin, my code is reduced to:

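Something along these lines (a sketch; the role, account and region values are placeholders):

```groovy
// Sketch using the plugin's withAWS step; role, account and region are placeholders.
def role   = 'deploy-role'
def region = 'ap-southeast-2'

withAWS(role: role, roleAccount: '123456789012', region: region) {
    // Everything in this block runs under the assumed role.
    sh 'aws s3 ls'
}
```
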
Here, I've got a couple of variables, but their names should make their purpose self-explanatory. The general idea is that anything you write within that withAWS block gets executed under the role specified in the role variable.

Blue-green deployments on AWS

Before we do this, make sure you have a service that you want to deploy. To keep things simple, I followed the Spring Boot tutorial on building a RESTful web service. It was quick, and the app worked like a charm. As usual, I went a bit extra and made my app return a stubbed list of users; you don't have to. Just make sure you have a /healthcheck endpoint and another endpoint that you can test with. In my case, I have /users, which returns a list of users.
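
If you want a starting point, a controller along these lines would do (a minimal sketch; the class name and user fields are made up):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.List;
import java.util.Map;

// Minimal sketch of the two endpoints; class name and user fields are made up.
@RestController
public class UserController {

    @GetMapping("/healthcheck")
    public String healthcheck() {
        return "OK";
    }

    @GetMapping("/users")
    public List<Map<String, String>> users() {
        // Stubbed list of users.
        return List.of(
            Map.of("id", "1", "name", "Alice"),
            Map.of("id", "2", "name", "Bob")
        );
    }
}
```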

All righty then. Let's get a high-level overview of what the pieces are and how they are going to work together. But before we do that, let's go through a quick real-ish life scenario.

Say you have a service deployed on AWS, and now you have a newer version of that service that you'd like to release. Since you never know whether something works without actually trying it out, normally, after exhaustive testing in staging and other environments, you'd deploy that service into production for all your users. But ah ha! That one guy on your team forgot that one test case, the service blew up, and every single one of your users is now seeing error pages everywhere. This is bad, so you roll back to the previous version. That doesn't sound too bad yet, but by the time you've done all this, you'll have lost a couple of hours, which translates into actual money lost to the company, which could eventually make a dent in your end-of-year bonus.


AWS CloudFormation describe* permission error

So you know how sometimes it's difficult to work with AWS CloudFormation scripts? On the surface, errors are seemingly random and unrelated to the script. This is what happened today. I wrote an awesome parameterised CloudFormation script. I was quite proud of it, mainly because most of my parameters were typed. This meant that parameters like ec2_security_groups had a type of List<AWS::EC2::SecurityGroup::Id>. This not only makes it easier to work with the CloudFormation script from the AWS console, but also makes it very easy to work with from a CI/CD pipeline, because if those parameters are invalid, the script fails instantly instead of waiting for resources to deploy.
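
For illustration, a typed parameter looks like this (a sketch; note that CloudFormation parameter names must be alphanumeric, so the underscored name from my script becomes Ec2SecurityGroups here):

```yaml
Parameters:
  Ec2SecurityGroups:
    Type: List<AWS::EC2::SecurityGroup::Id>
    Description: Security groups to attach to the EC2 instances
```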

However, while doing this, I completely missed the fact that in order for CloudFormation to validate your input parameters, it needs to look them up first. And what IAM permission does a resource lookup need? describe!

For once, my IAM role had tightly restricted permissions, and because of this I had to expand them slightly to allow the describe calls.
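
In practice, that meant adding a statement along these lines to the role's policy (a sketch; the exact describe actions depend on which parameter types your template uses, and EC2 describe calls don't support resource-level restrictions, hence the wildcard):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcs"
      ],
      "Resource": "*"
    }
  ]
}
```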

I kept getting this error about the role not having the describe permission, and I kept wondering why it needed that. Well, now you know too!

Notes from AWS Developer Training (Day Three)

Creating Serverless Projects

In a serverless environment, AWS Lambda can be used in conjunction with Amazon API Gateway for HTTP interfacing, Amazon S3 for storage, Amazon ElastiCache for caching, and DynamoDB/RDS for database storage. Check out the Serverless Framework at serverless.com for more info.

Securing data in AWS

Infrastructure should be treated as code, i.e. kept in version control systems. Automate security and increase testing frequency via CI/CD. Fail early and fast. Test at production scale; there's no need to keep test servers alive. Spin up the entire production environment in test, deploy the code, run the tests, and then tear down the environment.
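
In a Jenkins pipeline, that pattern could look something like this (a sketch using the CloudFormation steps from the AWS pipeline plugin mentioned earlier; the stack name and test script are placeholders):

```groovy
node {
    try {
        // Spin up a full copy of the production stack for testing.
        cfnUpdate(stack: 'myapp-test', file: 'template.yaml')
        sh './run-tests.sh'   // run the tests against the fresh environment
    } finally {
        // Tear the environment down whether the tests passed or not.
        cfnDelete(stack: 'myapp-test')
    }
}
```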

Notes from AWS Developer Training (Day Two)

Achieving loose coupling with Events

Amazon SQS, SNS, DynamoDB Streams, Kinesis Streams, Lambda.

With an event-driven architecture, two systems don't need to know about each other. Each of them can fire events while the other responds to the specific events it cares about.

SNS has a publish/subscribe model. When a publisher pushes a message, all subscribers immediately get it. Subscribers can be email, SMS, SQS, Lambda, etc.

SQS uses queuing as the delivery method. Messages are persisted until they are polled. It's extremely scalable; a queue can potentially contain millions of messages.
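
As a quick illustration of both, here's how you might wire an SQS queue to an SNS topic with the AWS CLI (a sketch; names are placeholders, and note the queue also needs an access policy allowing SNS to SendMessage before delivery will succeed):

```bash
# Create a topic and a queue (names are placeholders).
TOPIC_ARN=$(aws sns create-topic --name orders --query TopicArn --output text)
QUEUE_URL=$(aws sqs create-queue --queue-name orders-queue --query QueueUrl --output text)
QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
  --attribute-names QueueArn --query Attributes.QueueArn --output text)

# Subscribe the queue to the topic, then publish a message.
aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol sqs --notification-endpoint "$QUEUE_ARN"
aws sns publish --topic-arn "$TOPIC_ARN" --message '{"orderId": 42}'

# The message sits in the queue until a consumer polls it.
aws sqs receive-message --queue-url "$QUEUE_URL"
```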