Serverless Puppeteer with AWS Lambda Layers and Node.js

Trying to use Puppeteer and Chromium in AWS Lambda is not as easy as it may seem, and many articles on the web proved to be outdated, incomplete or too complicated. After some research, I decided to share my steps here in case you are looking for the same thing.

Hopefully this guide will help you build your own serverless screenshot function and teach you something new in the process!

[Confused John Travolta GIF] Me trying to find the definitive article on Chromium and Lambda.

For this project I decided to use Serverless and Lambda Layers. I’m assuming you have a configured environment with Node.js, the AWS CLI installed, and proper AWS credentials, and that you’re familiar with Serverless.

Let’s get started!

Screenshots: THE FUNCTION

The goal here is to have a solution for capturing screenshots of web pages. We don’t want to run servers, so we’ll implement this as a Lambda function.

To keep it really simple, let’s say all the function needs to do is take a screenshot of the page at a given URL and upload the image to S3.

Creating the project

First, we’ll create the folder structure for our Serverless project and initialize a Node.js project in there. I’ll keep it as simple as possible in order to focus on what’s relevant here.

mkdir -p lambda-screenshots/src
cd lambda-screenshots
npm init -y

Next, we need to install Serverless (as a dev dependency, since it’s not going to be used by our function):

npm install --save-dev serverless

Now, in a typical Puppeteer project, we would install puppeteer and use it like this:

const puppeteer = require("puppeteer");
puppeteer.launch(... // launch a browser

In a Lambda environment this is tricky, and I ran into several problems: failing to bundle the right browser version; exceeding the package size limits imposed by Lambda; incompatibilities between the Chromium version and the Puppeteer API; getting it to work in Node 8 but not in Node 12. It was one frustration after another.

So instead of using puppeteer directly, we’ll use chrome-aws-lambda. I was happy to come across this module, which elegantly simplifies the whole process and works across the Node LTS runtimes from 8 to 12. Internally, the module does all kinds of bootstrapping and wiring, but all we really need to care about is getting the puppeteer instance via the exposed property:

const chromeLambda = require("chrome-aws-lambda");
chromeLambda.puppeteer.launch(...

All good so far? Cool, let’s install this module then:

npm install --save-dev chrome-aws-lambda

What’s that? I hear you saying:

“Wait Luis, you are installing it wrong! It’s not a dev dependency, since we are going to use this module in our function!”

Good point. But check this out: in production (i.e. AWS), the module will be preinstalled and ready to use.

Who needs roads when you have layers

How is that possible? Well, by using a Lambda layer that provides this module. The nice folks at shelf.io packaged the module in a layer and published it in all AWS regions, and because layers can be shared across accounts, we can use it just by declaring it in our function definition (more on that later).

So the only purpose of installing this module as a dev dependency is to get some nice IntelliSense and avoid IDE complaints that “the module doesn’t exist”. It’s also necessary for testing or invoking the function locally, but that’s out of the scope of this post.

Writing the function

Let’s think a bit more in detail about the logic we need to implement in the function:

  1. Accept a JSON object with the property “url”, which contains the URL of the web page we want to capture.
  2. Use puppeteer to launch a browser, navigate to the page and capture a screenshot.
  3. Upload the screenshot image to a bucket in S3.
  4. Return the screenshot image URL.

#1 comes in the Lambda event object; #2 we can implement with puppeteer; for #3 we can use the AWS SDK and #4 is just returning the URL string of the uploaded image.

Let’s take a look at the code:

// src/capture.js

// this module will be provided by the layer
const chromeLambda = require("chrome-aws-lambda");

// aws-sdk is always preinstalled in AWS Lambda in all Node.js runtimes
const S3Client = require("aws-sdk/clients/s3");

// create an S3 client
const s3 = new S3Client({ region: process.env.S3_REGION });

// default browser viewport size
const defaultViewport = {
  width: 1440,
  height: 1080
};

// here starts our function!
exports.handler = async event => {

  // launch a headless browser
  const browser = await chromeLambda.puppeteer.launch({
    args: chromeLambda.args,
    executablePath: await chromeLambda.executablePath,
    defaultViewport 
  });

  // open a new tab
  const page = await browser.newPage();

  // navigate to the page
  await page.goto(event.url);
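  // note: goto waits for the "load" event by default; for pages that keep loading
  // content afterwards, you could pass { waitUntil: "networkidle0" } as a second argument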

  // take a screenshot
  const buffer = await page.screenshot();

  // upload the image using the current timestamp as filename
  const result = await s3
    .upload({
      Bucket: process.env.S3_BUCKET,
      Key: `${Date.now()}.png`,
      Body: buffer,
      ContentType: "image/png",
      ACL: "public-read"
    })
    .promise();

  // return the uploaded image url
  return { url: result.Location };
};

Hopefully that was pretty clear.

[“That was easy” meme] Yup. Told you I was going to keep it simple!

Writing the Serverless manifest

Ok, now it’s time to write our Serverless manifest so we can deploy our function to AWS. I’ll put the whole file here; in addition to the inline comments, I’ll provide more details below, but I won’t go into everything, since many of the parameters are common Serverless definitions that you can find in their documentation.

# serverless.yml

service: lambdaScreenshot

custom:
  # change this name to something unique
  s3Bucket: screenshot-files

provider:
  name: aws
  region: us-east-1
  versionFunctions: false
  # here we put the layers we want to use
  layers:
    # Google Chrome for AWS Lambda as a layer
    # Make sure you use the latest version depending on the region
    # https://github.com/shelfio/chrome-aws-lambda-layer
    - arn:aws:lambda:${self:provider.region}:764866452798:layer:chrome-aws-lambda:10
  # function parameters
  runtime: nodejs12.x
  memorySize: 2048 # recommended
  timeout: 30
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: arn:aws:s3:::${self:custom.s3Bucket}/*

functions:
  capture:
    handler: src/capture.handler
    environment:
      S3_REGION: ${self:provider.region}
      S3_BUCKET: ${self:custom.s3Bucket}

resources:
  Resources:
    # Bucket where the screenshots are stored
    screenshotsBucket:
      Type: AWS::S3::Bucket
      DeletionPolicy: Delete
      Properties:
        BucketName: ${self:custom.s3Bucket}
        AccessControl: Private
    # Grant public read-only access to the bucket
    screenshotsBucketPolicy:
      Type: AWS::S3::BucketPolicy
      Properties:
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action:
                - s3:GetObject
              Principal: "*"
              Resource: arn:aws:s3:::${self:custom.s3Bucket}/*
        Bucket:
          Ref: screenshotsBucket

Just in case you missed it, here it goes again: make sure you put your own bucket name here:

custom:
  s3Bucket: screenshot-files # << change it!

otherwise you’ll get deployment errors, because the bucket will already exist (and it’s not yours).

Now the juicy part:

provider:
  # ...
  layers:
    - arn:aws:lambda:${self:provider.region}:764866452798:layer:chrome-aws-lambda:10

This is how we tell Serverless: “hey, when you create the function, add these layers”.

Every layer has an ARN, pretty much like any other AWS resource. If the owner grants public access, then any account can use this ARN to reference the layer and attach it to their own functions, which is the case for chrome-aws-lambda-layer.
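If you’re curious, you can inspect the layer version from your own account with the AWS CLI (this is purely for exploration, and assumes version 10 of the layer exists in us-east-1, as referenced in the manifest above):

aws lambda get-layer-version-by-arn \
  --arn arn:aws:lambda:us-east-1:764866452798:layer:chrome-aws-lambda:10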

The way layers work for Node.js runtimes is that any module installed under the path /opt/nodejs/node_modules will be resolved by Node at runtime. Because the layer in question has the chrome-aws-lambda module installed in that path, when your function code runs and you require() this module, it is resolved and imported just fine.
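As a quick illustration (not something you need in the final code), you could log where Node resolves the module from inside the handler; with the layer attached, it should point somewhere under the /opt/nodejs path:

// purely illustrative: prints the path the layer-provided module resolves to,
// e.g. something under /opt/nodejs/node_modules/chrome-aws-lambda
console.log(require.resolve("chrome-aws-lambda"));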

Deploy time!

What? Deploy? Don’t we run it locally first?

Well, in order to keep it simple for this post, we’ll skip the local testing part. It’s generally easy to test functions with Serverless, but this case deviates a bit from the usual scenario. I’ll write another post related to function testing; in the meantime you can always check the Serverless documentation.

So, we have everything we need, now it’s time to deploy our function to AWS. For this we need to execute the following Serverless command:

sls deploy

Serverless will take its time (especially the first time), but after a while it’ll be done. If you run into any issues, make sure to review the contents of serverless.yml.

Running the function

Let’s open our function in the AWS console and, before moving on, let me show you something: click “Layers” and notice the layer configured in our function.

Ok, now in the top right you’ll see a dropdown next to the “Test” button. Expand it and select “Configure test events”.

Give a name to the test event, clear everything in the textbox and enter the following:

{
  "url": "https://news.ycombinator.com/news"
}

Click “Create”, then “Test”. After a few seconds you should see a green (successful) result. Expand it by clicking “Details” to see your function’s response.
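The response is a JSON object containing the uploaded screenshot’s URL, along these lines (the bucket name and timestamp-based filename below are just illustrative; yours will differ):

{
  "url": "https://screenshot-files.s3.amazonaws.com/1589140800000.png"
}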

Copy the URL, paste it in your browser and there you go, a screenshot image courtesy of your serverless function 🦄🎉.

What now?

Of course, this function would be more useful if invoked by other means than just being triggered from the console. For example, you could attach an API Gateway event to expose it as an HTTP or WebSocket API.
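A minimal sketch of an HTTP endpoint in serverless.yml might look like this (the path and method are arbitrary, and the handler would also need to read the URL from the HTTP request body and return a proper HTTP response instead of a plain object):

functions:
  capture:
    handler: src/capture.handler
    events:
      - http:
          path: screenshot
          method: post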

You could also add more customizations via the Puppeteer API: take a screenshot of the whole page from top to bottom, capture just a specific element, change some styles, manipulate the DOM, and more.
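Some of those are one-liners with the Puppeteer API; for instance (the selector below is just a placeholder):

// full-page screenshot instead of just the viewport
const fullPageBuffer = await page.screenshot({ fullPage: true });

// or capture a single element (the selector here is only an example)
const element = await page.$("#main-content");
const elementBuffer = await element.screenshot();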

Or you could add more layers to enable additional functionality; for example, you could use GraphicsMagick to manipulate the screenshot image before uploading it. There’s a curated list of Lambda layers you can use; one of them is gm-lambda-layer.
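As a rough sketch, assuming that layer provides the GraphicsMagick binaries and that you bundle the gm npm module with your function, resizing the screenshot before the upload could look something like this:

// hypothetical helper: resize an image buffer with GraphicsMagick
const gm = require("gm");

const resize = (imageBuffer, width) =>
  new Promise((resolve, reject) =>
    gm(imageBuffer)
      .resize(width)
      .toBuffer("PNG", (err, out) => (err ? reject(err) : resolve(out)))
  );

// inside the handler, before the S3 upload:
// const smallBuffer = await resize(buffer, 640);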

Finally, I invite you to check out Chromda, the project that motivated this article.

Have fun!

