Serverless Puppeteer with AWS Lambda Layers and Node.js

Trying to use Puppeteer and Chromium in AWS Lambda is not as easy as it may seem, and many articles on the web proved to be outdated, incomplete or too complicated. After some research, I decided to share my steps here in case you are looking for the same thing.

Hopefully this guide will help you build your own serverless screenshot function and teach you something new in the process!


Me trying to find the definitive article on Chromium and Lambda.

For this project I decided to use Serverless and Lambda Layers. I’m assuming you have a configured environment with AWS CLI installed, proper AWS credentials and Serverless.

Let’s get started!

Screenshots: THE FUNCTION

The goal here is to have a solution for capturing screenshots of web pages. We don’t want to run servers, so we’ll implement this as a Lambda function.

To keep it really simple, let’s say all the function needs to do is take a screenshot of the page at a given URL and upload the image to S3.

Creating the project

First, we’ll create the folder structure for our Serverless project and initialize a Node.js project in there. I’ll keep it as simple as possible in order to focus on what’s relevant here.

mkdir -p lambda-screenshots/src
cd lambda-screenshots
npm init -y

Next, we need to install Serverless (as a dev dependency, since it’s not going to be used by our function):

npm install --save-dev serverless

Now, in a typical Puppeteer project, we are supposed to install puppeteer and use it like this:

const puppeteer = require("puppeteer");
const browser = await puppeteer.launch(); // launch a browser

In a Lambda environment this is tricky, and I ran into several problems – failing to bundle the right browser version; exceeding the package size limits imposed by Lambda; facing incompatibility issues between the Chromium version and the puppeteer API; getting it to work in Node 8 but not in Node 12. It was one frustration after another.

So instead of using puppeteer directly, we’ll use chrome-aws-lambda. I was happy to come across this module, which elegantly simplifies the whole process; plus, it works in Node.js LTS versions 8 to 12. Internally, the module does all kinds of bootstrapping and wiring, but all we really need to care about is getting the puppeteer instance via the exposed property:

const chromeLambda = require("chrome-aws-lambda");

All good, so far? Cool, let’s install this module then:

npm install --save-dev chrome-aws-lambda

What’s that? I hear you saying

“Wait Luis, you are installing it wrong! It’s not a dev dependency, since we are going to use this module in our function!”

Good point. But check this out: in production (i.e. AWS), the module will be preinstalled and ready to use.

Who needs roads when you have layers

How is that possible? Well, by using a Lambda layer that provides this module. The nice folks behind chrome-aws-lambda-layer packaged the module in a layer and published it in all AWS regions, and because layers can be shared across accounts, we can use it just by declaring it in our function definition (more on that later).

So the only purpose of installing this module as a dev dependency is to get some nice IntelliSense and avoid IDE complaints that “the module doesn’t exist”. Actually, it’s also necessary for testing or invoking the function locally, but that’s outside the scope of this post.

Writing the function

Let’s think a bit more in detail about the logic we need to implement in the function:

  1. Accept a JSON object with the property “url”, which contains the URL for the web page we want.
  2. Use puppeteer to launch a browser, navigate to the page and capture a screenshot.
  3. Upload the screenshot image to a bucket in S3.
  4. Return the screenshot image URL.

#1 comes in the Lambda event object; #2 we can implement with puppeteer; for #3 we can use the AWS SDK and #4 is just returning the URL string of the uploaded image.

Let’s take a look at the code:

// src/capture.js

// this module will be provided by the layer
const chromeLambda = require("chrome-aws-lambda");

// aws-sdk is always preinstalled in AWS Lambda in all Node.js runtimes
const S3Client = require("aws-sdk/clients/s3");

// create an S3 client
const s3 = new S3Client({ region: process.env.S3_REGION });

// default browser viewport size
const defaultViewport = {
  width: 1440,
  height: 1080
};

// here starts our function!
exports.handler = async event => {
  // launch a headless browser
  const browser = await chromeLambda.puppeteer.launch({
    args: chromeLambda.args,
    executablePath: await chromeLambda.executablePath,
    defaultViewport
  });

  // open a new tab
  const page = await browser.newPage();

  // navigate to the page
  await page.goto(event.url);

  // take a screenshot
  const buffer = await page.screenshot();

  // upload the image using the current timestamp as filename
  const result = await s3
    .upload({
      Bucket: process.env.S3_BUCKET,
      Key: `${Date.now()}.png`,
      Body: buffer,
      ContentType: "image/png",
      ACL: "public-read"
    })
    .promise();

  // return the uploaded image url
  return { url: result.Location };
};
Hopefully that was pretty clear.


Yup. Told you I was going to keep it simple!

Writing the Serverless manifest

Ok, now it’s time to write our Serverless file so we can deploy our function to AWS. Let me put the whole content here; in addition to the inline comments, I’ll provide more details below, but I won’t go into everything, since many of the parameters are common Serverless definitions that you can find in their documentation.

# serverless.yml

service: lambdaScreenshot

custom:
  # change this name to something unique
  s3Bucket: screenshot-files

provider:
  name: aws
  region: us-east-1
  versionFunctions: false
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:PutObjectAcl
      Resource: arn:aws:s3:::${self:custom.s3Bucket}/*

functions:
  capture:
    handler: src/capture.handler
    # here we put the layers we want to use
    layers:
      # Google Chrome for AWS Lambda as a layer
      # Make sure you use the latest version depending on the region
      - arn:aws:lambda:${self:provider.region}:764866452798:layer:chrome-aws-lambda:10
    # function parameters
    runtime: nodejs12.x
    memorySize: 2048 # recommended
    timeout: 30
    environment:
      S3_REGION: ${self:provider.region}
      S3_BUCKET: ${self:custom.s3Bucket}

resources:
  Resources:
    # Bucket where the screenshots are stored
    screenshotsBucket:
      Type: AWS::S3::Bucket
      DeletionPolicy: Delete
      Properties:
        BucketName: ${self:custom.s3Bucket}
        AccessControl: Private
    # Grant public read-only access to the bucket
    screenshotsBucketPolicy:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket:
          Ref: screenshotsBucket
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action:
                - s3:GetObject
              Principal: "*"
              Resource: arn:aws:s3:::${self:custom.s3Bucket}/*

Just in case you missed it, here goes again: make sure you put your own bucket name here

  s3Bucket: screenshot-files # << change it!

otherwise you’ll get deploy issues because the bucket will already exist (and is not yours).

Now the juicy part:

  # ...
  functions:
    capture:
      # ...
      layers:
        - arn:aws:lambda:${self:provider.region}:764866452798:layer:chrome-aws-lambda:10

This is how we tell Serverless: “hey, when you create the function, add these layers”.

Every layer has an ARN, just like any other AWS resource. If the owner grants public access, then any account can use this ARN to reference the layer and use it in their own functions, which is the case for chrome-aws-lambda-layer.

The way layers work for Node.js runtimes is that any module installed under the path /opt/nodejs/node_modules will be resolved by Node at runtime. Because the layer in question has the chrome-aws-lambda module installed in that path, when your function code runs and you require() this module, it is resolved and imported just fine.

Deploy time!

What? Deploy? Don’t we run it locally first?

Well, in order to keep it simple for this post, we’ll skip the local testing part. It’s generally easy to test functions with Serverless, but this case deviates a bit from the usual scenario. I’ll write another post related to function testing; in the meantime you can always check the Serverless documentation.

So, we have everything we need; now it’s time to deploy our function to AWS. For this we need to execute the following Serverless command:

sls deploy

Serverless will take its time (especially being the first time), but after a while it’ll be done. If you run into any issues, make sure to review the contents of serverless.yml.

Running the function

Let’s open the AWS console, and before moving on, let me show you something. Click “Layers” and notice the layer configured in our function.

Ok, now in the top right you’ll see a dropdown next to a “Test” button. Expand it and select “Configure test events”.

Give a name to the test event, clear everything in the textbox, and enter the following (with the URL of the page you want to capture as the value):

  {
    "url": ""
  }

Click “Create”, then “Test”. After a few seconds, you should see a green (successful) result. Expand it by clicking on “Details” and you will see your function response with the uploaded image URL.

Copy the URL, paste it in your browser and there you go, a screenshot image courtesy of your serverless function 🦄🎉.
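If you’d rather call the function from code than from the console, here’s a hedged sketch using the AWS SDK’s Lambda client. The deployed function name below follows Serverless’ service-stage-function naming convention and is an assumption, so adjust it to whatever your deploy actually created:

```javascript
// Hypothetical helper: invoke the deployed screenshot function from Node.js.
// `lambdaClient` would be created with require("aws-sdk/clients/lambda") in
// real use; it's passed in as a parameter so the helper stays easy to test.
async function requestScreenshot(lambdaClient, url) {
  const res = await lambdaClient
    .invoke({
      FunctionName: "lambdaScreenshot-dev-capture", // assumed deployed name
      Payload: JSON.stringify({ url })
    })
    .promise();

  // our handler returns { url: "<S3 object URL>" }
  return JSON.parse(res.Payload).url;
}
```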

What now?

Of course, this function would be better used if invoked by other means than just being triggered from the console. For example, you could plug in an API Gateway event source to expose it as an HTTP or WebSocket API.
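As a rough sketch (function name, path, and method here are assumptions, not part of the setup above), an HTTP trigger could be declared like this in serverless.yml:

```yaml
functions:
  capture:
    handler: src/capture.handler
    events:
      - http: # creates an API Gateway endpoint that invokes the function
          path: screenshot
          method: post
```

Keep in mind that API Gateway proxy events wrap the payload in a JSON string under event.body, so the handler would need to parse that instead of reading event.url directly.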

Also, you could add more customizations via the Puppeteer API: take a screenshot of the whole page from top to bottom, or capture just a specific element; change some styles; manipulate the DOM; and more.
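As an illustration (selector and fullPage are hypothetical inputs, not fields our handler supports), the screenshot step could branch like this:

```javascript
// Hedged sketch: choose between a full-page and a single-element screenshot.
// `page` is a Puppeteer Page; `opts.selector` and `opts.fullPage` are
// hypothetical options you could read from the Lambda event.
async function takeScreenshot(page, opts = {}) {
  if (opts.selector) {
    const element = await page.$(opts.selector);
    if (!element) throw new Error(`No element matches ${opts.selector}`);
    return element.screenshot(); // captures just that element's bounding box
  }
  return page.screenshot({ fullPage: Boolean(opts.fullPage) });
}
```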

Or you could append more layers to enable additional functionality; for example, you could use GraphicsMagick to manipulate the screenshot image before uploading it. There’s a curated list of Lambda layers that you can use; one of them is gm-lambda-layer.

Finally, I invite you to check out Chromda, the project that motivated this article.

Have fun!

11 thoughts on “Serverless Puppeteer with AWS Lambda Layers and Node.js”

  1. Great article! All the latest details and ways to use chrome+puptr in the lambda. Thank you.
    One point: Given “serverless.yml” does not add “lambda layer” to the “lambda function”

  2. Thank you very much for this great article. I struggled the last 2 days and now I’m happy to receive a positive result with your example.

  3. Thanks for the article. I can’t get it to work though. When I deploy, it is trying to lump my code and the layer together and fit under the 261MB and it fails saying it is too big. Thoughts?

    1. Hey Earle, not sure what may be going on. Serverless shouldn’t be bundling all together but rather use the layer directory only. If you share a repo I can take a look.

  4. Thank you for great advice.

    I was just wondering.
    What is the cost of this? I mean, you have to launch a whole new browser on every request – do you know what the overhead price per request is?
    Instead of running, for example, a single instance in k8s and queuing jobs?

    Thank you

    1. Right now, I am using chrome_aws_lambda, puppeteer, puppeteer-core, and @types/puppeteer in lambda. I also use Sharp to resize into a thumbnail. I am not sure how chrome_aws_lambda is structured to use chromium but it does not try to load chromium into my lambda function. It seems to be using an instance some other way. When I save my html, my code creates the image and thumbnail and stores in the database along with the html. It takes around 3 seconds to finish. A better way would probably be to store the html in the database by itself and have a job go through and create the thumbnails in batch for those not having an image in the database. Probably a better user experience.
