Recently in Technology Category

How to convert an ElasticBeanstalk application to Lambda

The goal: take the code that has been running in an ElasticBeanstalk environment and run it as a Lambda function, triggered whenever a file is dropped into an S3 bucket.

The requirement: to deploy properly into our prod environment, all resources must be deployed via CloudFormation. Note that we are not the development team, so we assume that the code has already been written and uploaded as a .war/.zip file to an S3 bucket. This means that, at a high level, we need three deployments:

Deployments

  1. First deployment
    • Create an IAM role that uses the same policy as the EB role, but whose trust policy allows lambda.amazonaws.com to assume it. Also attach several managed policies to let the Lambda instances come into being.
    • Create a Lambda function, loading its code from a .war file uploaded to S3, and assign it the role.
    • Create an S3 bucket for sourcing files.
  2. Second deployment
    • Create a Lambda permission (note that this is a thing in the Lambda namespace, not IAM) that allows the S3 bucket to invoke the Lambda function. This cannot be done until the Lambda function and the S3 bucket have been created (deployment 1).
  3. Third deployment
    • Update the S3 bucket from deployment 1 to notify the Lambda function. This cannot be done until the Lambda function and the Lambda permission exist, since creating the notification sends a test event that must succeed for the update to be successful.
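The three-deployment ordering above can be sketched as a thin wrapper script. The stack names and template file names here are hypothetical placeholders, and the actual aws cloudformation deploy call is left commented out; the sketch only shows the required ordering:

```shell
#!/bin/sh
# Sketch of the three sequential CloudFormation deployments.
# Stack/template names are hypothetical; uncomment the aws call to deploy.
set -e

deploy() {
    # aws cloudformation deploy --stack-name "$1" --template-file "$2"
    echo "deploying $1 from $2"
}

deploy lambda-core    01-role-function-bucket.json  # IAM role, Lambda function, S3 bucket
deploy lambda-invoke  02-lambda-permission.json     # Lambda permission (needs step 1)
deploy bucket-notify  03-bucket-notification.json   # S3 notification update (needs steps 1 and 2)
```

Each step must finish before the next starts, since CloudFormation validates the notification against the live Lambda permission in the final step.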

CloudFormation Samples [1]

Lambda Role

This is the IAM role given to the running Lambda instance. The example given spawns a Lambda inside an existing VPC, so it needs the managed VPC execution policy. If you are running outside a VPC, a different managed policy (AWSLambdaBasicExecutionRole) is needed instead.

    "LambdaRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "RoleName" : "LambdaRole",
        "ManagedPolicyArns" : [
          "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
        ],
        "AssumeRolePolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [  
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [ "lambda.amazonaws.com" ]
              },
              "Action": [ "sts:AssumeRole" ]
          }]
        },
        "Path": "/"
      }
    },

Lambda Function

The actual Lambda function definition. The code needs to have been uploaded to S3 in order to deploy. This can be run in parallel with the IAM role creation. This example builds a Lambda that runs on Java 8, but Node.js and Python would be similar. In this sample the Lambda is given a security group to allow it access to back-end services (RDS, etc.) where access is controlled by source group.

    "SearchLambda": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Description" : "Description Text",
        "FunctionName" : { "Fn::Join" : ["-", [{"Ref" : "EnvTag"}, "import", "lambda01"]]  },
        "Handler": "org.hbsp.common.lambda.pim2cs.S3EventHandler",
        "Role": { "Fn::GetAtt" : ["LambdaRole", "Arn"] },
        "MemorySize" : "512",
        "Code": {
          "S3Bucket": "Sourcecode-BucketName",
          "S3Key": { "Fn::Join" : ["/", ["directory/path", {"Ref" : "EnvTag"}, "artifact-name-version.zip"]]}
        },
        "Runtime": "java8",
        "Timeout": "300",
        "VpcConfig" : {
          "SecurityGroupIds" : [
            {"Ref" : "AppServerSG"}
          ],
          "SubnetIds" : [
            { "Ref" : "PriSubnet1" },
            { "Ref" : "PriSubnet2" },
            { "Ref" : "PriSubnet3" }
          ]
        }
      }
    }

S3 Bucket (without Notifications)

Initial deployment of the S3 bucket to create it. This is needed for the Lambda permission, but cannot have notifications attached yet.

    "PlatformBucketQA":{
     "Type": "AWS::S3::Bucket",
     "Properties" : {
       "BucketName" : "sp-transfer-qa",
       "Tags" : [
           <Many Tags Go Here>
       ],
       "LoggingConfiguration" : {
         "DestinationBucketName" : "logbucket",
         "LogFilePrefix" : "s3/"
       }
     }
   },

Lambda Permission

This assigns calling permission TO the Lambda function from the source S3 bucket. Both of those must already be created before this can be executed. It is possible that this would work with a "DependsOn" clause, but I find it easier to simply deploy this as a separate step from the Lambda and the bucket.

    "SearchLambdaPerm": {
      "Type": "AWS::Lambda::Permission",
      "Properties" : {
        "Action": "lambda:InvokeFunction",
        "FunctionName": {"Ref": "SearchLambda"},
        "Principal": "s3.amazonaws.com",
        "SourceAccount": {"Ref": "AWS::AccountId"},
        "SourceArn": { "Fn::Join": [":", [
            "arn", "aws", "s3", "" , "", {"Ref" : "PlatformBucketQA"}]]
        }
      }
    },

S3 Bucket (with Notifications)

This is an addition to the previous S3 bucket code, adding the specific notification configuration. In this model the Lambda is triggered only by files created (where "created" includes renaming/moving files within the bucket) that match the glob asset/incoming/*xml. The "Event" parameter can be changed to trigger on different S3 actions.

    "PlatformBucketQA":{
     "Type": "AWS::S3::Bucket",
     "Properties" : {
       "BucketName" : "sp-transfer-qa",
       "Tags" : [
           <Many Tags Go Here>
       ],
       "NotificationConfiguration": {
         "LambdaConfigurations": [
           {
             "Event" : "s3:ObjectCreated:*",
             "Function" : { "Fn::GetAtt" : ["SearchLambda", "Arn"] },
             "Filter" : {
               "S3Key" : {
                 "Rules" : [
                   {
                     "Name" : "prefix",
                     "Value" : "asset/incoming"
                   },
                   {
                     "Name" : "suffix",
                     "Value" : "xml"
                   }
                 ]
               }
             }
           }
         ]
       },
       "LoggingConfiguration" : {
         "DestinationBucketName" : "logbucket",
         "LogFilePrefix" : "s3/"
       }
     }
   },

  1. The samples are in JSON; the YAML versions would have the same fields, just different structure.
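As an illustration, here is the Lambda permission sample above rendered as YAML, using the short-form intrinsic functions (same fields, different structure):

```yaml
# YAML rendering of the SearchLambdaPerm resource shown above
SearchLambdaPerm:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref SearchLambda
    Principal: s3.amazonaws.com
    SourceAccount: !Ref AWS::AccountId
    SourceArn: !Join [":", ["arn", "aws", "s3", "", "", !Ref PlatformBucketQA]]
```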

Take a walk

To distract you from impending snow, 7:48 of animated walking

Walking City from Universal Everything on Vimeo.

A

Stupid Linux Tricks

    tcp        0      0 XX.XX.XX.XX:44101         XX.XX.XX.XX:1521          ESTABLISHED

That is Java's Oracle driver choosing a source port at random, which is what it does all the time. Only this time the chosen port is the one used by Java's RMI, so a different app was unable to start up; it failed when trying to bind to port 44101. Of course Oracle connections come and go, so with no user action the problem went away in about 10 minutes.
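On Linux you can see the range the kernel draws those random source ports from, and (one possible fix, run as root) reserve the RMI port so the kernel never hands it out; the port number here is just the one from this incident:

```shell
#!/bin/sh
# Print the ephemeral (local) port range the kernel assigns to
# outbound connections such as the Oracle driver's.
read LOW HIGH < /proc/sys/net/ipv4/ip_local_port_range
echo "ephemeral source ports: $LOW-$HIGH"

# One way to avoid the collision (as root): reserve the RMI port so
# it is never chosen as an ephemeral source port.
#   sysctl -w net.ipv4.ip_local_reserved_ports=44101
```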

A

Ejected Hard Disk.jpg


I've had plenty of hard-drive heads crash, but I've never had one eject and try to parachute to safety.  Hard to see, but the little curve of metal half-hidden under the top of the magnet at 12 o'clock is the head from the top arm

A

Biblindex


Biblindex is one of those fantastic projects of the internet, where you can search for something that used to be available only to scholars of a specific field. In this case it's a corpus of 400,000 biblical references in Jewish and Christian literature from late antiquity through the Middle Ages. You pick the chapter and verse, and optionally the author, country of origin, or date, and it pulls up every reference to that section in those texts. The UI is pretty poorly done (the + button means "select this thing and give me an optional other", which is entirely non-obvious), but it is powerful. Free, but registration required.

A

Helpful hint: if you have a bash script with a function calling getopts, e.g.

make_tarball() {
    local START=`get_start_time`
    ## declare some variables
    .....
    local OPTARG OPTIND
    ## Parse the args
    while getopts "s:d:a:x:X:v" ARG
    do
        case $ARG in
            s) SOURCE_PATH="$SOURCE_PATH $OPTARG"
                ;;
            ....
        esac
    done

you MUST make OPTIND a local variable, or a second (third, etc.) call to the function will fail, with OPTIND too large to see any of the arguments.
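A minimal sketch of the pitfall, with a hypothetical function and a single option; getopts keeps its position in the global OPTIND, so without the local declaration the second call would start parsing past the end of its own arguments:

```bash
#!/bin/bash
# Why OPTIND must be local: getopts tracks its parse position in
# OPTIND, and a stale global value makes repeat calls see no args.
parse() {
    local SOURCE_PATH=""
    local OPTARG OPTIND        # the line that makes repeat calls work
    while getopts "s:" ARG; do
        case $ARG in
            s) SOURCE_PATH="$SOURCE_PATH $OPTARG" ;;
        esac
    done
    echo "parsed:$SOURCE_PATH"
}

parse -s /tmp/a    # prints: parsed: /tmp/a
parse -s /tmp/b    # prints: parsed: /tmp/b  (fails without local OPTIND)
```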

blegh
A

About this Archive

This page is an archive of recent entries in the Technology category.
