I recently needed to load files into a Docker container, but only once, and only at the startup of that container. Specifically, these were signing certificates stored in an S3 bucket. Obviously (I hope), we don’t want to bake sensitive certificates and keys into a Docker image, so we need some way to pull the files into the container file system at startup. The containers run on AWS Fargate, and Fargate doesn’t support EBS or other persistent storage, so mounting a file system was out. Anyway, here’s how I accomplished the task; let me know if you have a better way.
Docker containers have the notion of “entrypoints”. From the documentation: “An ENTRYPOINT allows you to configure a container that will run as an executable.” Basically, it tells Docker what executable to run, and with what arguments, when the container starts up. Normally, you don’t have to specify this, as your base image usually does. For example, the tail end of the official php:7.2-apache Dockerfile looks like this:
ENTRYPOINT ["docker-php-entrypoint"] COPY apache2-foreground /usr/local/bin/ WORKDIR /var/www/html EXPOSE 80 CMD ["apache2-foreground"]
So, somehow I need to inject my own entrypoint executable, then hand off to the Apache CMD. Here’s my Dockerfile:
FROM php:7.2-apache

# ...other docker commands here...

ENTRYPOINT ["/var/www/my-app-entrypoint.sh"]
CMD ["apache2-foreground"]
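The elided commands vary by app, but for this to work the image does need to contain the two scripts shown below, with the shell script marked executable. Something along these lines (the paths are assumptions matching the ENTRYPOINT above):

# Copy the entrypoint shell script and the PHP script it calls
# into the image; /var/www also needs to hold the composer
# vendor/ directory the PHP script requires.
COPY my-app-entrypoint.sh my-app-entrypoint.php /var/www/
RUN chmod +x /var/www/my-app-entrypoint.sh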
Basically, my-app-entrypoint.sh is a copy of the default “docker-php-entrypoint” script, except that it runs my PHP script first, and it drops the Apache argument handling (I don’t use it):
#!/bin/bash
set -e

# Run the one-time setup script; because of set -e, the container
# dies here if the PHP script exits non-zero.
php -f /var/www/my-app-entrypoint.php

# Hand off to whatever CMD specified (apache2-foreground here).
exec "$@"
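(The exec "$@" line matters: exec replaces the shell process with the CMD command instead of forking it, so Apache ends up running as PID 1 and receives Docker’s stop signals directly.)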
So now, every time a container based on this image starts up, my PHP script runs before anything else. Here’s the code for that:
<?php require __DIR__ . '/vendor/autoload.php'; use Aws\S3\S3Client; $cert_dir = __DIR__ . '/certs'; $s3_objects = array_filter(explode(',', getenv('CERT_FILE_OBJECTS')), 'strlen'); if(!$s3_objects) { error_log('No S3 objects specified.'); exit(1); } if(!file_exists($cert_dir)) { mkdir($cert_dir); } $s3 = new S3Client([ 'version' => 'latest', 'region' => getenv('AWS_REGION') ]); foreach($s3_objects as $object) { try { $file = $s3->getObject([ 'Bucket' => getenv('S3_BUCKET'), 'Key' => $object, ]); if(!isset($file['ContentLength']) || empty($file['ContentLength']) || !isset($file['Body']) || empty($file['Body'])) { throw new Exception('Empty object content for ' . $object); } } catch(Exception $e) { if($e instanceof \Aws\Exception\AwsException) { $error_message = $e->getAwsRequestId() . '; ' . $e->getAwsErrorType() . '; ' . $e->getAwsErrorCode(); } else { $error_message = $e->getMessage(); } error_log($error_message); exit(1); } file_put_contents($cert_dir . '/' . basename($object), $file['Body']); }
It’s pretty simple. In short, this code is:
1) Checking to see if the cert dir exists, and if not, creating it
2) Using the AWS PHP SDK to grab the certs from an S3 bucket and writing them to the local cert directory
3) Killing the container (the non-zero exit aborts the entrypoint script) if the certs can’t be fetched for some reason, since the app won’t run without them
For this to work, you’ll need to set a few environment variables in the container: AWS_REGION for the AWS region, S3_BUCKET for the bucket name, CERT_FILE_OBJECTS for a comma-separated list of the cert files’ keys/paths in the bucket, and two for the AWS credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
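For local testing, that might look something like this (the image name, bucket, and object keys are made up; substitute your own):

# Hypothetical values for illustration only.
docker run --rm \
  -e AWS_REGION=us-east-1 \
  -e AWS_ACCESS_KEY_ID=AKIA... \
  -e AWS_SECRET_ACCESS_KEY=... \
  -e S3_BUCKET=my-cert-bucket \
  -e CERT_FILE_OBJECTS="signing.crt,signing.key" \
  my-app

On Fargate, the same variables go in the task definition’s container environment.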
It’s kind of hacky, but I’ve found a lot of things around running on Docker and AWS feel hacky. Like I said, let me know a better way.