Javier

Exploring tide-lambda-listener

Hi there! Welcome back; this time we are back with an exploration post. Today we will explore how to use Tide inside an AWS Lambda function.

The AWS Lambda service not only provides a set of predefined runtimes (e.g. Node.js, Java, Ruby, etc.) but also lets you provide a custom runtime. This opens the door to using, for example, the Rust runtime from awslabs, but as the title says we want to use Tide, and in particular we will use the tide-lambda-listener crate that @fishrock123 published a couple of weeks ago.

Before beginning our journey, let's first run cargo init to start coding...

$ cargo init tide-lambda-listener-example-blog

Let's meet sam

We will use the SAM CLI to manage and deploy the resources needed by our application, and we assume that you already have an AWS account and the SAM CLI installed.

Our example app will be very simple; it will have only two routes:

- POST /hello -> save the greeting name in an external DB.

- GET /hello/:name -> get the greeting from the DB.

Both will be exposed through AWS API Gateway, so let's take a look at the template.yml used by the SAM CLI.

# template.yml
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'

Globals:
    Function:
        MemorySize: 128
        Timeout: 10
        Environment:
            Variables:
                DATABASE_URL: "DB_URL"
Resources:
  GetHello:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: GetHello
      Handler: bootstrap
      Runtime: provided
      CodeUri: .
      Description: Test function
      Policies:
        - AWSLambdaBasicExecutionRole
      Events:
        HelloAPI:
          Type: Api
          Properties:
            Path: /hello/{name}
            Method: GET
    Metadata:
      BuildMethod: makefile
  PostHello:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: PostHello
      Handler: bootstrap
      Runtime: provided
      CodeUri: .
      Description: Test function
      Policies:
        - AWSLambdaBasicExecutionRole
      Events:
        HelloAPI:
          Type: Api
          Properties:
            Path: /hello
            Method: POST
    Metadata:
      BuildMethod: makefile

Outputs:
  MyApi:
    Description: "API Gateway endpoint URL"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"

We defined two functions as resources; a couple of things to notice here:

  • Runtime must be set to provided
  • Handler is required, but since we use a custom runtime it is not actually used.
  • CodeUri is set to . since we will use the makefile build method.
  • BuildMethod must be set to makefile

Now, when we run sam build, the script will execute our Makefile looking for the target build-<lambda fn logical id>, so our next step is to create the Makefile at the root of our repo with a target for each of our functions. Also notice that we need to target musl to run inside Lambda.

# Makefile
build-GetHello: export CARGO_LAMBDA_FN=get_hello
build-GetHello:
	TARGET_CC=x86_64-linux-musl-gcc  RUSTFLAGS="-C linker=x86_64-linux-musl-gcc" cargo build --release --target x86_64-unknown-linux-musl
	cp ./target/x86_64-unknown-linux-musl/release/tide-lambda-listener-example $(ARTIFACTS_DIR)/bootstrap

build-PostHello: export CARGO_LAMBDA_FN=post_hello
build-PostHello:
	TARGET_CC=x86_64-linux-musl-gcc  RUSTFLAGS="-C linker=x86_64-linux-musl-gcc" cargo build --release --target x86_64-unknown-linux-musl
	cp ./target/x86_64-unknown-linux-musl/release/tide-lambda-listener-example $(ARTIFACTS_DIR)/bootstrap

There is one more trick here: we export the name of the function so that our build.rs file can compile just the code we need. We will explain this in more detail later.

Just a small recap: so far we have two files that are used by the SAM CLI, the template that defines the resources and the Makefile that compiles our code. Let's now take a look at our actual Rust code.

Back to Rust

Time to get back to Rust now and start adding the dependencies we will use for this app.

// Cargo.toml
[dependencies]
async-std = { version = "1.9.0", features = [ "attributes" ] }
tide = "0.16.0"
tide-lambda-listener = "0.1.3"
serde = { version = "1.0.115", features = ["derive"] }
serde_json = "1.0.57"
sqlx = { version = "0.5.5", features = ["runtime-async-std-rustls", "offline", "macros",  "json", "postgres"] }
dotenv = "0.15"

And now let's take a look at our main.rs:

#[cfg(target_env = "musl")]
use tide_lambda_listener::LambdaListener;

use serde::{Deserialize, Serialize};
use sqlx::PgPool;
use sqlx::Pool;
#[cfg(not(target_env = "musl"))]
use tide::prelude::*;
use tide::Server;

mod functions;
#[cfg(not(target_env = "musl"))]
use functions::get_hello;
#[cfg(not(target_env = "musl"))]
use functions::post_hello;

#[cfg(target_env = "musl")]
include!(concat!(env!("OUT_DIR"), "/lambda.rs"));

#[derive(Clone, Debug)]
pub struct State {
    db_pool: PgPool,
}

#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct Greeting {
    name: String
}

#[async_std::main]
async fn main() -> tide::http::Result<()> {
    dotenv::dotenv().ok();
    tide::log::start();

    let db_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    let db_pool = make_db_pool(&db_url).await;
    let mut app = server(db_pool).await;
    let app_ref = &mut app;

    #[cfg(target_env = "musl")]
    {
        register_route(app_ref);
        app.listen(LambdaListener::new()).await?;
    }
    #[cfg(not(target_env = "musl"))]
    {
        post_hello::register_route(app_ref);
        get_hello::register_route(app_ref);

        let port = std::env::var("PORT").unwrap_or_else(|_| "8080".to_string());
        let mut listener = app
            .bind(format!("0.0.0.0:{}", port))
            .await
            .expect("can't bind the port");

        for info in listener.info().iter() {
            println!("Server listening on {}", info);
        }
        listener.accept().await.unwrap();
    }

    Ok(())
}

// helpers
async fn server(db_pool: PgPool) -> Server<State> {
    let state = State { db_pool };
    tide::with_state(state)
}

pub async fn make_db_pool(db_url: &str) -> PgPool {
    Pool::connect(db_url).await.unwrap()
}

A couple of things to notice here:

  • Allow running both locally and in Lambda

Since we are targeting Lambda we need to compile for musl, but we also want a way to run this code locally, so I'm using #[cfg(..)] attributes to mark which parts to compile based on the compilation target.

  • The Lambda listener doesn't need to bind any port.

A custom runtime needs to implement the Lambda runtime interface: it fetches the next request, executes the handler, and returns the response. That's why the listener doesn't bind any port:

   app.listen(LambdaListener::new()).await?;
  • Compile only the needed code

Since we upload the code for each function separately, we want to compile only the code needed by that function, and we will use a build script to help us.
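To make the second point above concrete: a custom runtime talks to the Lambda Runtime API over HTTP in a loop (GET the next invocation, run the handler, POST the result). This is what tide-lambda-listener handles for us internally; the helper functions below are only an illustration of the endpoint shapes, not code from the crate.

```rust
// The Lambda Runtime API is reachable at the host:port given in the
// AWS_LAMBDA_RUNTIME_API environment variable. A custom runtime loops:
// GET the next invocation, run the handler, POST the result back.
// No port is ever bound by the runtime itself.

fn next_invocation_url(api: &str) -> String {
    // GET: the response body is the event payload and the
    // Lambda-Runtime-Aws-Request-Id header identifies the invocation.
    format!("http://{}/2018-06-01/runtime/invocation/next", api)
}

fn response_url(api: &str, request_id: &str) -> String {
    // POST the handler's output here to complete the invocation.
    format!("http://{}/2018-06-01/runtime/invocation/{}/response", api, request_id)
}

fn main() {
    let api = "127.0.0.1:9001"; // example value of AWS_LAMBDA_RUNTIME_API
    println!("{}", next_invocation_url(api));
    println!("{}", response_url(api, "abc-123"));
}
```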

Let's build

The idea is to split the code for each function into separate files and rely on the build script to copy the content into a new file called lambda.rs that will be included in our main.rs when targeting musl.

So, this is our filesystem tree

|____main.rs
|____functions
| |____post_hello.rs
| |____get_hello.rs
| |____mod.rs
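The tree above includes a functions/mod.rs that the post never shows. A plausible version, assuming the module declarations are gated to non-musl builds so the Lambda build compiles only the file pulled in by include! (the actual file in the repo may differ):

```rust
// src/functions/mod.rs (a guess; only compiled into the local build,
// so the musl/Lambda build doesn't also compile the unused handlers)
#[cfg(not(target_env = "musl"))]
pub mod get_hello;
#[cfg(not(target_env = "musl"))]
pub mod post_hello;
```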

Our build.rs file reads which file to copy from an environment variable and creates the lambda.rs file.

// build.rs
use std::env;
use std::fs;
use std::path::Path;

fn main() {
    let target_env = env::var("CARGO_CFG_TARGET_ENV").unwrap();
    if target_env == "musl" {
        println!("cargo:warning= BUILDING FOR LAMBDA");
        // Re-run this build script when the selected function or its source changes.
        println!("cargo:rerun-if-env-changed=CARGO_LAMBDA_FN");
        println!("cargo:rerun-if-changed=src/functions");
        let lambda_fn =
            env::var("CARGO_LAMBDA_FN").expect("env var CARGO_LAMBDA_FN must be set");
        let input_path = Path::new(&env::var("CARGO_MANIFEST_DIR").unwrap())
            .join(format!("src/functions/{}.rs", lambda_fn));
        let out_dir = env::var_os("OUT_DIR").unwrap();
        let dest_path = Path::new(&out_dir).join("lambda.rs");
        fs::copy(input_path, dest_path).unwrap();
    } else {
        println!("cargo:warning= NO BUILDING FOR LAMBDA");
    }
}

And then, again only if we are targeting musl, we use this line in our main.rs to include the generated file (lambda.rs).

#[cfg(target_env = "musl")]
include!(concat!(env!("OUT_DIR"), "/lambda.rs"));
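The #[cfg] attribute removes code entirely at compile time. As a tiny self-contained sketch of the same target-based selection, here is the expression form, cfg!, with illustrative names (not the app's actual code):

```rust
// Picks a listener description based on the compilation target.
// Unlike #[cfg], cfg! compiles both branches and evaluates to a bool,
// but the selection is still decided entirely at compile time.
fn listener_kind() -> &'static str {
    if cfg!(target_env = "musl") {
        "lambda" // what the musl/Lambda build would select
    } else {
        "tcp" // what a local (non-musl) build selects
    }
}

fn main() {
    println!("listening via: {}", listener_kind());
}
```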

Last piece of the puzzle

We have already set up our files (template.yml, Makefile) to work with the SAM CLI, and our main.rs and build.rs to conditionally compile our code depending on the target.

It's time now to take a look at our function files and see how we call the right function from our main.rs.

Each of our function files has two functions: one to register the route, and the actual handler that handles the request.

// get_hello.rs
use sqlx::query_as;
use tide::{Error, Request, Response};

pub fn register_route(app: &mut tide::Server<crate::State>) {
    app.at("/hello/:name").get(handler);
}

pub async fn handler(req: Request<crate::State>) -> tide::Result {
    let name = req.param("name")?;

    let db_pool = req.state().db_pool.clone();
    let row = query_as!(
        crate::Greeting,
        r#"
        SELECT name FROM "tide-lambda-example-greetings"
        WHERE name = $1
        "#,
        name
    )
    .fetch_optional(&db_pool)
    .await
    .map_err(|e| Error::new(409, e))?;

    let res = match row {
        None => Response::new(404),
        Some(row) => {
            let mut r = Response::new(200);
            r.set_body(format!("Hi again {}, nice to see you.", row.name));
            r
        }
    };

    Ok(res)
}

// post_hello.rs
use sqlx::query;
use tide::{Error, Request, Response};

pub fn register_route(app: &mut tide::Server<crate::State>) {
    app.at("/hello").post(handler);
}

pub async fn handler(mut req: Request<crate::State>) -> tide::Result {
    let greeting: crate::Greeting = req.body_json().await?;
    let db_pool = req.state().db_pool.clone();

    query!(
        r#"
        INSERT INTO "tide-lambda-example-greetings" (name) VALUES
        ($1) returning name
        "#,
        greeting.name
    )
    .fetch_one(&db_pool)
    .await
    .map_err(|e| match e.as_database_error() {
        Some(_) => Error::from_str(400, "You already say hi!"),
        None => Error::new(409, e),
    })?;

    let mut res = Response::new(201);

    res.set_body(format!(
        "Hello {}, welcome to this tide lambda example.",
        greeting.name
    ));
    Ok(res)
}

So, now in our main.rs file we can do...

    #[cfg(target_env = "musl")]
    {
        register_route(app_ref);
        app.listen(LambdaListener::new()).await?;
    }

And we register the route in our Tide app :)

One more thing

We are almost there; we just need to add our .env file with the config for our DB.

DATABASE_URL="postgresql db url"

Now we are ready to test our app locally:

cargo run
   Compiling tide-lambda-listener-example v0.1.0 (~/personal/rust/tide-lambda-listener-example)
warning:  NO BUILDING FOR LAMBDA
    Finished dev [unoptimized + debuginfo] target(s) in 29.87s
     Running `target/debug/tide-lambda-listener-example`
tide::log Logger started
    level Info
Server listening on http://0.0.0.0:8080

And test it

❯ curl -X POST -d '{"name":"ferris"}' http://0.0.0.0:8080/hello
Hello ferris, welcome to this tide lambda example.

❯ curl  http://0.0.0.0:8080/hello/ferris
Hi again ferris, nice to see you.

Nice! Both functions work as expected and we can save and retrieve the greetings :)

Now it's time to build and deploy to AWS and see if it works as expected...

❯ sam build
Building codeuri: . runtime: provided metadata: {'BuildMethod': 'makefile'} functions: ['GetHello']
Running CustomMakeBuilder:CopySource
Running CustomMakeBuilder:MakeBuild
Current Artifacts Directory : /Users/pepo/personal/rust/tide-lambda-listener-example/.aws-sam/build/GetHello
Building codeuri: . runtime: provided metadata: {'BuildMethod': 'makefile'} functions: ['PostHello']
Running CustomMakeBuilder:CopySource
Running CustomMakeBuilder:MakeBuild
Current Artifacts Directory : /Users/pepo/personal/rust/tide-lambda-listener-example/.aws-sam/build/PostHello

Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Deploy: sam deploy --guided

Nice!! We can now deploy our function using the --guided flag to fill in some information.

❯ sam deploy --guided

And then we will get something like this as output:


Description         API Gateway endpoint URL
Value               https://wxxogq7l66.execute-api.us-east-2.amazonaws.com/Prod/hello/
-----------------------------------------------------------

Successfully created/updated stack - tide-example in us-east-2

And now we can check our functions live in Lambda with the new endpoint URL:

❯ curl -X POST -d '{"name":"rustacean"}' https://wxxogq7l66.execute-api.us-east-2.amazonaws.com/Prod/hello
Hello rustacean, welcome to this tide lambda example.

❯ curl https://wxxogq7l66.execute-api.us-east-2.amazonaws.com/Prod/hello/rustacean
Hi again rustacean, nice to see you.

That's all for today. We explored how to create a simple serverless app with Tide and tide-lambda-listener, compiling only the needed code and using the SAM CLI to create all the AWS resources for us.

You can check the complete code in the repo.

As always, I write this as a learning journal; there may be other, more elegant and correct ways to make this work in Lambda, and any feedback is welcome.

Thanks!