Recently in AWS Category

Installing DigiPal - Part 2, Scripts and Configuration

Once you have all the parts installed, there is some configuration to do by hand that the Dockerfile would otherwise take care of automatically.

Postgres

(The AWS way to do this would be to spin up a Postgres RDS instance, but that was too many variables for a first try. I leave it as an exercise for the reader.)

  • Login to postgres and create a digipal DB and digipal user; for that the instructions in GitHub are accurate, see the database section, up to the step that starts "After that, run in your terminal the following commands:". (A sketch of the full sequence follows this list.)

  • By default postgres is not configured to allow password login locally, only ident-based ("account") socket login. You need to change that by editing /var/lib/pgsql93/data/pg_hba.conf. Change the line that looks like

local   all             all                                   ident

to be

local   all             all                                   md5

Replace ident with md5 and restart postgres. You can now continue with the GitHub instructions from "After that, run in your terminal the following commands:".
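For reference, here is a sketch of the whole sequence on the Amazon Linux AMI. The role and database names come from the DigiPal README, the password is a placeholder, and the service commands assume the postgresql93 packages from Part 1:

sudo service postgresql93 initdb     # first boot only: initialise the cluster
sudo service postgresql93 start
sudo -u postgres psql -c "CREATE USER digipal WITH PASSWORD 'changeme';"
sudo -u postgres psql -c "CREATE DATABASE digipal OWNER digipal;"
# after editing pg_hba.conf as above, pick up the change:
sudo service postgresql93 restart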

Lighttpd

  • Add include "vhosts.d/digipal.conf" at the end of /etc/lighttpd/lighttpd.conf
  • Uncomment include "conf.d/fastcgi.conf" in /etc/lighttpd/modules.conf
  • Create /etc/lighttpd/vhosts.d/digipal.conf with the following contents:
#include_shell "/usr/share/lighttpd/create-mime.assign.pl"
#include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
fastcgi.server = ( "/iip/iipsrv.fcgi" =>
  (( "host" => "127.0.0.1",
     "port" => 9000,
     "check-local" => "disable",
     "min-procs" => 1,
     "max-procs" => 1,
     "bin-path" => "/etc/lighttpd/iipsrv.fcgi",
     "bin-environment" => (
        "LOGFILE" => "/tmp/iipsrv.log",
        "VERBOSITY" => "10",
        "MAX_IMAGE_CACHE_SIZE" => "20",
#        "FILENAME_PATTERN" => "_pyr_",
        "JPEG_QUALITY" => "75",
        "MAX_CVT" => "3000",
        "FILESYSTEM_PREFIX" => "/apps/digipal/images/"
      )
  ))
)

NB. you will need to set FILESYSTEM_PREFIX to suit your environment.

  • Copy iipsrv.fcgi to /etc/lighttpd/ and set it as executable: chmod a+rx /etc/lighttpd/iipsrv.fcgi
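A minimal sketch for bringing lighttpd up and checking the wiring, assuming lighttpd is listening on port 8081 as described in the Access section below:

sudo chkconfig lighttpd on
sudo service lighttpd start
# any HTTP response here (even an IIP error page) confirms the fastcgi hookup
curl -i 'http://localhost:8081/iip/iipsrv.fcgi'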

Access

As I have my system configured, these apps are not accessible to the outside world, only to localhost. When I want to use it, I ssh to the machine and do local port forwarding, i.e.

ssh -L 8000:localhost:8000 -L 8081:localhost:8081 ${USER}@${SERVER}

Port 8000 is used for the web server, 8081 for the image server (that's what lighttpd does). You should then be able to access your digipal by pointing a browser at http://localhost:8000

NB. this worked for me, after some hacking around, but I provide no guarantee that these instructions are complete. Moniti estis (you have been warned).

Digipal in AWS, Part 1


Installing DigiPal in AWS

There is some documentation on their GitHub about spinning up DigiPal on a server (versus the Docker distribution), but I found that information incomplete. Here is what I had to do to configure a running instance in AWS, on AWS's Linux AMI, which is mostly like Fedora. Note that I do not have Nginx running, so there's still some port weirdness. This is part 1 of 2: here we'll just install and build everything. Part 2 will cover configuring lighttpd to serve the images and the postgresql account setup.

Pre-reqs

What to install

The following packages are all to be installed with yum. There's a single command block at the bottom if you want to just do it all at once.

  • git - git (duh)
  • Postgres - postgresql93 postgresql93-server postgresql93-devel
  • ImageMagick - (note that we only need the CLI tools, not the devel libs) ImageMagick
  • libmysqlclient-devel - (stated in the reqs, no idea why) mysql56-devel
  • lighttpd - (needed for iip fastcgi) lighttpd lighttpd-fastcgi
  • Pre-reqs to build IIP - gcc gcc-c++ libxml2-devel libxml2-python27 libxslt-devel automake autoconf libtool libjpeg-devel libtiff-devel
  • Lessc - (used by Digipal in the CLI, runs in Node.js. Note that this is not part of the AWS repos, so you have to pull from epel) nodejs npm python-lesscpy

Big omnibus commands, need to be run as root or via sudo

yum install git postgresql93 postgresql93-server postgresql93-devel ImageMagick \
  mysql56-devel gcc gcc-c++ libxml2-devel libxml2-python27 libxslt-devel automake \
  autoconf libtool libjpeg-devel libtiff-devel lighttpd lighttpd-fastcgi
yum install nodejs npm python-lesscpy --enablerepo=epel

What to build

  • lessc - needs to be installed by npm; -g makes it global - npm install -g less
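A quick sanity check that the global install landed on your PATH:

lessc --version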

IIP

  • Get the code from git - git clone https://github.com/ruven/iipsrv.git
  • Build it:
cd iipsrv
./autogen.sh
./configure
make
make check
sudo cp src/iipsrv.fcgi /etc/lighttpd/

DigiPal

Now it's time to install digipal itself.

  • get the code from Git
git clone https://github.com/kcl-ddh/digipal
cd digipal
git checkout 1.2.1a
  • Now follow their instructions to get dependencies and install
    pip install -r requirements.txt

NB 1.2.1a is the current version in git. If you want to look at the available versions, go to the GitHub page and click the "branches" dropdown. Replace 1.2.1a with any of those values verbatim.
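The rest of the bring-up follows the DigiPal README. As a rough sketch only - DigiPal is a Django/South project of that era, so the usual manage.py sequence applies, but treat the commands in their docs as authoritative:

cd digipal
python manage.py syncdb          # create the schema; prompts for an admin account
python manage.py migrate         # apply the South migrations
python manage.py collectstatic   # gather static assets
python manage.py runserver       # serves on localhost:8000 by default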

How to convert ElasticBeanstalk application to Lambda

The goal - take the code that has been running in an ElasticBeanstalk environment and run it as a Lambda job, triggering whenever a file is dropped into an S3 bucket.

The Requirement - To properly deploy it into our prod environment, all resources must be deployed via CloudFormation. Note that we are not the development team, so we are assuming that some code has been written and uploaded as a .war/.zip file to an S3 bucket. This means that, at a high level, we need three deployments:

Deployments

  1. Deployment
    • Create an IAM role that uses the same policy as the EB role, but can assume lambda.amazonaws.com as its role. Also include several managed policies to let the Lambda instances come into being
    • Create a Lambda function, loading its code from a .war file uploaded to S3. Assign it the role
    • Create an S3 bucket for sourcing files
  2. Deployment
    • Create a Lambda permission (note that this is a thing in the Lambda namespace, not IAM) that allows the S3 bucket to invoke the Lambda function. This cannot be done until the Lambda function and the S3 bucket have been created (deployment 1)
  3. Deployment
    • Update the S3 bucket from deployment 1 to notify the Lambda function. This cannot be done until the Lambda and the Lambda permission are created, since creation runs a test notification that must succeed for the update to be successful.
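As a sketch, the three deployments can be driven as sequential updates of a single stack with the AWS CLI. The stack and template file names here are placeholders; CAPABILITY_NAMED_IAM is needed because the role below carries an explicit RoleName:

aws cloudformation deploy --stack-name search-lambda \
  --template-file deploy-1-role-lambda-bucket.json --capabilities CAPABILITY_NAMED_IAM
aws cloudformation deploy --stack-name search-lambda \
  --template-file deploy-2-permission.json --capabilities CAPABILITY_NAMED_IAM
aws cloudformation deploy --stack-name search-lambda \
  --template-file deploy-3-notification.json --capabilities CAPABILITY_NAMED_IAM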

Cloudformation Samples [1]

Lambda Role

This is the IAM role given to the running Lambda instance. The example given spawns a Lambda inside an existing VPC, so it needs the managed VPC execution policy. If you are running outside a VPC, a different managed policy (e.g. AWSLambdaBasicExecutionRole) is needed.

    "LambdaRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "RoleName" : "LambdaRole",
        "ManagedPolicyArns" : [
          "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
        ],
        "AssumeRolePolicyDocument": {
          "Version" : "2012-10-17",
          "Statement": [  
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [ "lambda.amazonaws.com" ]
              },
              "Action": [ "sts:AssumeRole" ]
          }]
        },
        "Path": "/"
      }
    },
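Because the role carries an explicit RoleName, creating or updating a stack containing it requires the CAPABILITY_NAMED_IAM acknowledgement (passed via --capabilities in the CLI sketch above).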

Lambda Function

The actual lambda function definition. Needs to have the code uploaded to S3 in order to deploy. This can be run in parallel with the IAM role creation. This example builds a Lambda that runs in Java8, but Node.js and Python would be similar. In this sample the Lambda is given a SecurityGroup to allow it access to back-end services (RDS, etc), where access is by source group.

    "SearchLambda": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Description" : "Description Text",
        "FunctionName" : { "Fn::Join" : ["-", [{"Ref" : "EnvTag"}, "import", "lambda01"]]  },
        "Handler": "org.hbsp.common.lambda.pim2cs.S3EventHandler",
        "Role": { "Fn::GetAtt" : ["LambdaRole", "Arn"] },
        "MemorySize" : "512",
        "Code": {
          "S3Bucket": "Sourcecode-BucketName",
          "S3Key": { "Fn::Join" : ["/", ["directory/path", {"Ref" : "EnvTag"}, "artifact-name-version.zip"]]}
        },
        "Runtime": "java8",
        "Timeout": "300",
        "VpcConfig" : {
          "SecurityGroupIds" : [
            {"Ref" : "AppServerSG"}
          ],
          "SubnetIds" : [
            { "Ref" : "PriSubnet1" },
            { "Ref" : "PriSubnet2" },
            { "Ref" : "PriSubnet3" }
          ]
        }
      }
    }

S3 Bucket (without Notifications)

Initial deployment of S3 bucket to create it. This is needed for the Lambda permissions, but cannot have notifications attached yet.

    "PlatformBucketQA":{
     "Type": "AWS::S3::Bucket",
     "Properties" : {
       "BucketName" : "sp-transfer-qa",
       "Tags" : [
           <Many Tags Go Here>
       ],
       "LoggingConfiguration" : {
         "DestinationBucketName" : "logbucket",
         "LogFilePrefix" : "s3/"
       }
     }
   },

Lambda Permission

This assigns calling permission TO the lambda function from the source S3 bucket. Both of those must already exist before this can be executed. It is possible that this would work with a "DependsOn" clause, but I find it easier to simply deploy this as a separate step from the Lambda and Bucket.

    "SearchLambdaPerm": {
      "Type": "AWS::Lambda::Permission",
      "Properties" : {
        "Action": "lambda:InvokeFunction",
        "FunctionName": {"Ref": "SearchLambda"},
        "Principal": "s3.amazonaws.com",
        "SourceAccount": {"Ref": "AWS::AccountId"},
        "SourceArn": { "Fn::Join": [":", [
            "arn", "aws", "s3", "" , "", {"Ref" : "PlatformBucketQA"}]]
        }
      }
    },
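Before moving on to deployment 3, you can confirm the permission landed by dumping the function's resource policy. The function name below assumes EnvTag is qa, per the Fn::Join in the function definition:

aws lambda get-policy --function-name qa-import-lambda01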

S3 Bucket (with Notifications)

This is an addition to the previous S3 bucket code, adding the specific notification configuration. In this model the function is triggered only by objects created (where "created" includes renaming/moving files within the bucket) whose keys match the glob asset/incoming/*xml, expressed below as a prefix rule and a suffix rule. The "Event" parameter can be changed to trigger on different S3 actions.

    "PlatformBucketQA":{
     "Type": "AWS::S3::Bucket",
     "Properties" : {
       "BucketName" : "sp-transfer-qa",
       "Tags" : [
           <Many Tags Go Here>
       ],
       "NotificationConfiguration": {
         "LambdaConfigurations": [
           {
             "Event" : "s3:ObjectCreated:*",
             "Function" : { "Fn::GetAtt" : ["SearchLambda", "Arn"] },
             "Filter" : {
               "S3Key" : {
                 "Rules" : [
                   {
                     "Name" : "prefix",
                     "Value" : "asset/incoming"
                   },
                   {
                     "Name" : "suffix",
                     "Value" : "xml"
                   }
                 ]
               }
             }
           }
         ]
       },
       "LoggingConfiguration" : {
         "DestinationBucketName" : "logbucket",
         "LogFilePrefix" : "s3/"
       }
     }
   },
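Once all three deployments have gone through, a quick end-to-end check is to drop a matching object into the bucket and watch the function's log group (again assuming EnvTag is qa; aws logs tail requires AWS CLI v2):

aws s3 cp sample.xml s3://sp-transfer-qa/asset/incoming/sample.xml
aws logs tail /aws/lambda/qa-import-lambda01 --follow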

  1. These samples are in JSON; the code in YAML will have the same fields, just a different structure
