Serverless Framework – Run your Kubernetes Workloads on Amazon EC2 Spot Instances with Amazon EKS – Part 2

Andrei Maksimov

In the previous article “Serverless Framework – Run your Kubernetes Workloads on Amazon EC2 Spot Instances with Amazon EKS – Part 1” we created a fully functional Kubernetes cluster backed by Spot Instances using the AWS EKS service.

In this article, we will automate the conversion of video files uploaded to an S3 bucket using Kubernetes jobs. We’ll run these jobs on top of Spot Instances and trigger them from an AWS Lambda function in reaction to the S3 file upload event.

All sources are available in my GitHub project.

Restoring EKS cluster

If you shut down your Kubernetes cluster, it’s a good time to launch it again. All you need to do is get the source code we created in the previous article from my GitHub project:

git clone
cd aws-eks-spot-instances-serverless-framework-demo

Since the first article was published, AWS has significantly improved and simplified the worker node (minion) bootstrap process. You need to replace the SpotNodeLaunchConfig: section with the following:

SpotNodeLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    AssociatePublicIpAddress: true
    ImageId: ami-0a0b913ef3249b655
    InstanceType: m3.medium
    IamInstanceProfile:
      Ref: NodeInstanceProfile
    KeyName: 'Lenovo T410'
    # Maximum Spot instance price (do not launch if higher)
    SpotPrice: 1
    SecurityGroups:
      - Ref: NodeSecurityGroup
    UserData:
      Fn::Base64:
        Fn::Join:
          - ''
          - - "#!/bin/bash -xe\n"
            - 'set -o xtrace'
            - "\n"
            - Fn::Join:
                - ' '
                - - '/etc/eks/bootstrap.sh'
                  - Ref: KubernetesCluster
            - "\n"
            - '/opt/aws/bin/cfn-signal -e $? '
            - '         --stack ${self:service}-${self:provider.stage} '
            - '         --resource NodeGroup '
            - '         --region ${self:provider.region}'

Additionally, we will need to be able to get the Lambda function execution role ARN, so let’s add a LambdaFunctionsRoleArn resource output:

LambdaFunctionsRoleArn:
  Description: 'Lambda Functions Role Arn'
  Value:
    Fn::GetAtt: [IamRoleLambdaExecution, Arn]

Here IamRoleLambdaExecution is the default Lambda function execution role created by the Serverless Framework. You may find its declaration in the .serverless/cloudformation-template-update-stack.json file inside the project folder.

Also, open the serverless.yml file, change the service: name (first line) slightly to avoid errors caused by non-unique S3 bucket names, and change KeyName: in the SpotNodeLaunchConfig: section from ‘Lenovo T410’ to your own SSH key name.

That should be enough to set up your personal Kubernetes cluster backed by AWS Spot Instances:

sls deploy

The process will take a while. As soon as the Kubernetes cluster deployment finishes, we need to create the ~/.kube/config file with the following command (please use the latest awscli):

aws eks update-kubeconfig --name $(sls info --verbose | grep 'stack:' | awk '{split($0,a," "); print a[2]}')

Test that kubectl is working by launching the following command:

kubectl get svc

You should see the following output:

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   <cluster-ip>   <none>        443/TCP   25m

Now we need to create a ConfigMap to allow the Spot instances to connect to the Kubernetes master.

Download configuration example:

curl -O

Get Kubernetes cluster ARN using the following command:

sls info --verbose | grep KubernetesClusterNodesRoleArn

And paste the returned value as the value of rolearn: in aws-auth-cm.yaml.


As you remember, we’ve added the LambdaFunctionsRoleArn output. We also need to add it to the aws-auth-cm.yaml file to allow kubectl authentication from the Lambda function.

Let’s get Lambda function role ARN:

sls info --verbose | grep LambdaFunctionsRoleArn

And add it as an additional rolearn: declaration under mapRoles::

- rolearn: # paste the LambdaFunctionsRoleArn value here
  username: lambda-user
  groups:
    - system:masters

Deploy this ConfigMap to the EKS master as you usually would in Kubernetes, then remove the file; we won’t need it anymore:

kubectl apply -f aws-auth-cm.yaml
rm aws-auth-cm.yaml

Now all we need to do is redeploy the stack and watch our instances join the cluster:

sls deploy

You may need to reboot or recreate instances already launched by the Auto Scaling group to let them connect to the cluster. In our case, this can be done manually.


Creating Lambda function

As an example, we’ll take the lambda-kubectl GitHub project. But instead of writing bash scripts, we’ll use the Serverless Framework’s packaging feature.

Let’s structure our files a little and put each function’s code in a separate folder. Add a package: option to your serverless.yml file:

package:
  individually: true

Create two folders, one for each of our functions, and put the function code .py files into these folders:

mkdir upload_thumbnail
mkdir upload_video
mv upload_thumbnail.py upload_thumbnail/
mv upload_video.py upload_video/

Next, change the handler: for each of our functions to include its folder name, like so:

handler: upload_video/upload_video.handler


handler: upload_thumbnail/upload_thumbnail.handler

Let’s redeploy our stack to check that we’ve done everything correctly:

sls deploy

Now that our functions are structured, let’s add kubectl and ~/.kube/config to them.

You can find the official link to the current kubectl version for AWS EKS in the documentation. You need the Linux binary.

mkdir bin
curl -o kubectl
mv kubectl bin/

Make kubectl executable:

chmod +x bin/kubectl

And copy the kubectl configuration:

mkdir .kube
cp ~/.kube/config .kube/
chmod 644 .kube/config

Let’s update our functions.

As kubectl is ~80 MB of data and the compressed function size is ~27 MB, we’ll continue with the first function only. You will easily be able to create the second one yourself at the end.

Let’s include the necessary files and exclude everything not needed in our functions. To do so, add a package: option to the functions like so:

functions:
  videoUpload:
    handler: upload_video/upload_video.handler
    package:
      include:
        - upload_video/**
        - .kube/**
        - bin/**
      exclude:
        - ./**
    events:
      - s3:
          bucket: '${self:service}-${self:provider.stage}-uploads'
          event: s3:ObjectCreated:*
  videoThumbnail:
    handler: upload_thumbnail/upload_thumbnail.handler
    package:
      include:
        - upload_thumbnail/**
      exclude:
        - ./**
    events:
      - s3:
          bucket: '${self:service}-${self:provider.stage}-thumbnails'
          event: s3:ObjectCreated:*

This declaration will assemble the VideoUploadLambdaFunction function with the function source code, kubectl, and the connection config inside, so we can easily use them from our Python code in the Lambda function.

Serverless Framework - EKS Create Lambda Functions 3 - VideoUploadLambdaFunction Function Structure

At the same time, we’re not including kubectl and the connection config in the VideoThumbnailLambdaFunction function.
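If you want to confirm what actually ended up in each package, the artifacts Serverless builds land in the .serverless/ folder as zip files, and a short script can list their contents (a sketch; the list_package helper name and the example zip path are illustrative, not part of the project):

```python
import zipfile


def list_package(zip_path):
    """Return the sorted file list of a built Lambda package."""
    with zipfile.ZipFile(zip_path) as z:
        return sorted(z.namelist())


# Example (path is an assumption based on Serverless' default output folder):
# list_package('.serverless/videoUpload.zip')
```

Checking the listing for bin/kubectl and .kube/config is a quick way to catch include/exclude mistakes before deploying.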

Let’s redeploy our stack now:

sls deploy

Now that we have everything necessary in our function, let’s write some Python code. Paste this code into the upload_video/upload_video.py file:

import logging
import os
import subprocess
import shutil

logger = logging.getLogger()
logger.setLevel(logging.INFO)

MY_PATH = os.path.dirname(os.path.realpath(__file__))
ROOT = os.path.abspath(os.path.join(MY_PATH, os.pardir))
DIST_KUBECTL = os.path.join(ROOT, 'bin/kubectl')
DIST_AUTHENTICATOR = os.path.join(ROOT, 'bin/aws-iam-authenticator')
KUBECTL = '/tmp/kubectl'
AUTHENTICATOR = '/tmp/aws-iam-authenticator'
KUBE_CONFIG = os.path.join(ROOT, '.kube/config')


def handler(event, context):

    bucket_name = event['Records'][0]['s3']['bucket']['name']
    file_key = event['Records'][0]['s3']['object']['key']
    logger.info('Reading {} from {}'.format(file_key, bucket_name))

    logger.info('Copying `kubectl` to /tmp to make it executable...')
    shutil.copyfile(DIST_KUBECTL, KUBECTL)
    shutil.copyfile(DIST_AUTHENTICATOR, AUTHENTICATOR)

    logger.info('Making `kubectl` executable...')
    os.chmod(KUBECTL, 0o755)
    logger.info('Now permissions are: {}'.format(
        oct(os.stat(KUBECTL).st_mode & 0o777)))

    logger.info('Making `aws-iam-authenticator` executable...')
    os.chmod(AUTHENTICATOR, 0o755)
    logger.info('Now permissions are: {}'.format(
        oct(os.stat(AUTHENTICATOR).st_mode & 0o777)))

    logger.info('Adding /tmp to PATH...')
    os.environ['PATH'] = '{}:/tmp'.format(os.environ['PATH'])

    logger.info('Testing `aws-iam-authenticator`...')
    cmd = 'aws-iam-authenticator token -i aws-eks-spot-serverless-demo-dev'
    logger.info('Execute command: {}'.format(cmd))

    process = subprocess.Popen(
        cmd.split(),
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )
    out, err = process.communicate()
    errcode = process.returncode
    logger.info(
        'Subprocess exited with code: {}. Output: "{}". Error: "{}"'.format(
            errcode, out, err
        )
    )

    job_description = """
apiVersion: batch/v1
kind: Job
metadata:
  name: make-thumbnail
spec:
  template:
    spec:
      containers:
      - name: make-thumbnail
        image: rupakg/docker-ffmpeg-thumb
        env:
          - name: AWS_REGION
            value: us-east-1
          - name: INPUT_VIDEO_FILE_URL
            # NOTE: the URL scheme below is an assumption based on the
            # public S3 URL format used in Rupak's original example
            value: https://s3.amazonaws.com/{}/{}
          - name: OUTPUT_S3_PATH
            value: aws-eks-spot-serverless-demo-dev-thumbnails
          - name: OUTPUT_THUMBS_FILE_NAME
            value: {}
          - name: POSITION_TIME_DURATION
            value: 00:01
      restartPolicy: Never
  backoffLimit: 4
""".format(
        bucket_name, file_key,
        # thumbnail name derived from the source key (assumption)
        '{}.png'.format(os.path.splitext(file_key)[0])
    )

    cmd = 'kubectl --kubeconfig {} create -f -'.format(KUBE_CONFIG)
    logger.info('Trying to execute command: {}'.format(cmd))

    process = subprocess.Popen(
        cmd.split(),
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE
    )
    out, err = process.communicate(input=job_description.encode())
    errcode = process.returncode
    logger.info(
        'Subprocess exited with code: {}. Output: "{}". Error: "{}"'.format(
            errcode, out, err
        )
    )

Sure, there are a lot of things to improve here, but I wanted to show you the basic idea of how we can use kubectl with EKS from Lambda functions:

  • Put kubectl and aws-iam-authenticator into the Lambda function package
  • Put the kubectl config there as well
  • Move kubectl and aws-iam-authenticator to the /tmp folder inside the Lambda function so we can make them executable
  • Make them executable
  • Launch any Kubernetes command using kubectl from the Lambda function
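The steps above can be distilled into a small reusable helper (a sketch, not code from the function above; run_bundled_binary and its arguments are hypothetical names):

```python
import os
import shutil
import subprocess
import tempfile


def run_bundled_binary(src_path, args, stdin_data=None):
    """Copy a binary bundled with the Lambda package to /tmp,
    make it executable, and run it.

    The deployment package itself is mounted read-only, which is why
    /tmp is the only place where we can set the executable bit.
    """
    dest = os.path.join(tempfile.gettempdir(), os.path.basename(src_path))
    shutil.copyfile(src_path, dest)
    os.chmod(dest, 0o755)  # rwxr-xr-x
    proc = subprocess.run(
        [dest] + list(args),
        input=stdin_data,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    return proc.returncode, proc.stdout
```

With a helper like this, both the aws-iam-authenticator test call and the kubectl create -f - call become one-liners.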

And, yes, I did not build my own Docker container, but used Rupak’s container (repo) from his article instead. As you can see, everything’s working.

Serverless Framework - EKS Lambda - Function Execution Result

I hope you’ve already checked out his article, so I don’t need to prove that his solution works.

Cleaning up

To clean up, all you need to do is run the following command to destroy the infrastructure:

sls remove

Future improvements

  • With this approach you can run only one job at a time, because its name is hardcoded in the job_description variable. To overcome this “problem”, either delete the previously created job first, or add a timestamp or ID to make the job name unique.
  • Sure, we need to refactor the code a little bit (DRY principle)
  • Also, you may want to create a Thumbnails Lambda function which can do something with uploaded thumbnails.
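For the first point above, a unique job name can be generated like this (a sketch; unique_job_name is my illustrative helper, not part of the project code):

```python
import time
import uuid


def unique_job_name(base='make-thumbnail'):
    # Kubernetes object names must be valid RFC 1123 labels:
    # lowercase alphanumerics and dashes, 63 characters max.
    return '{}-{}-{}'.format(base, int(time.time()), uuid.uuid4().hex[:6])
```

Substituting this name into the job_description template lets each S3 upload create its own job without colliding with previous runs.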


Working through both of my articles, we’ve learned:

  • How to automatically create an EKS cluster backed by cheap Spot Instances
  • How to manage your EKS cluster from a Lambda function

I hope this article was useful for you. If so, please share or like it!

Stay tuned!


Leave a comment

If you’d like to ask a question about the code or a piece of configuration, feel free to use a code snippet sharing service or a similar tool, as Facebook comments break code formatting.