Errors pushing an image to a new ECR repo on AWS

Hey everyone,

I normally use DigitalOcean or Azure for Docker and Kubernetes but have decided to give AWS a go this time around. I was following a guide on deploying an image to a new ECR repo and hit a couple of issues.

The first was that running the login command output help options instead of the password I was expecting:

aws ecr get-login --no-include-email --region us-east-2

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument operation: Invalid choice, valid choices are:

batch-check-layer-availability           | batch-delete-image                      
batch-get-image                          | batch-get-repository-scanning-configuration
complete-layer-upload                    | create-pull-through-cache-rule          
create-repository                        | delete-lifecycle-policy                 
delete-pull-through-cache-rule           | delete-registry-policy                  
delete-repository                        | delete-repository-policy                
describe-image-replication-status        | describe-image-scan-findings            
describe-images                          | describe-pull-through-cache-rules       
...

This turned out to be because the get-login command has been deprecated and removed from the AWS CLI (it is not available in v2). Instead, use the following:

aws ecr get-login-password | docker login --username AWS --password-stdin "$(aws sts get-caller-identity --query Account --output text).dkr.ecr.<REGION_ID>.amazonaws.com"

There’s a pretty detailed thread on GitHub here: https://github.com/aws/aws-cli/issues/5014
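
For completeness, once the login succeeds the push itself is the usual tag-and-push sequence. A minimal sketch, assuming a repository named my-app already exists in the target region (my-app is a placeholder):

# Tag the local image with the full ECR repository URI
docker tag my-app:latest <ACCOUNT_ID>.dkr.ecr.<REGION_ID>.amazonaws.com/my-app:latest

# Push the tagged image to ECR
docker push <ACCOUNT_ID>.dkr.ecr.<REGION_ID>.amazonaws.com/my-app:latest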

The second issue I ran into was an error while trying to run the new command:

An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation: User: arn:aws:iam::<ACCOUNT_ID>:user/<USER> is not authorized to perform: ecr:GetAuthorizationToken on resource: * because no identity-based policy allows the ecr:GetAuthorizationToken action

Attaching the following managed policy to my user resolved the issue: AmazonEC2ContainerRegistryPowerUser
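
If you prefer the CLI to the console, the equivalent is attaching the managed policy to your user. A minimal sketch (replace <USER> with your IAM user name):

# Attach the AWS managed ECR power user policy to an IAM user
aws iam attach-user-policy \
    --user-name <USER> \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser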

Once I was past this, I hit another issue using the command from the GitHub link above:

Error response from daemon: login attempt to https://<ACCOUNT_ID>.dkr.ecr.us-east-2.amazonaws.com/v2/ failed with status: 400 Bad Request

This took a bit of digging, but eventually I came across a thread where someone was using the same command and had hit the same issue. Adding the region to the get-login-password call seemed to fix it:

aws ecr get-login-password --region <REGION_ID> | docker login --username AWS --password-stdin "$(aws sts get-caller-identity --query Account --output text).dkr.ecr.<REGION_ID>.amazonaws.com"

With the region added I finally got a “Login Succeeded” message and my push worked. This is the thread mentioning the region id in case you need a bit more info: https://github.com/aws/aws-cli/issues/5317#issuecomment-835645395

Console.log output not appearing – AWS SAM Node.js

Hi everyone,

I ran into a bit of an interesting issue today after updating AWS SAM: all of my Node console.log output stopped appearing locally.

For now, there’s a pretty simple workaround:

sam build --use-container && sam local start-api 2>&1 | tr "\r" "\n"

Append 2>&1 | tr "\r" "\n" (including the quotes) to your start-api command and you should begin to see the output as expected. The tr call translates carriage returns into newlines so your terminal no longer swallows the log lines.

CREATE_IN_PROGRESS when creating a certificate with CloudFormation

Hi everyone,

I ran into a bit of an issue today while creating a certificate with CloudFormation. After kicking the stack off it ended up hanging on a step to create a domain verification entry in Route 53.

I had used this script multiple times to create a certificate for a subdomain, but this time I included an apex domain as well. To narrow things down a little further I checked out the certificate via the console.

While the subdomain had passed validation, the apex domain was still sitting in pending. Surprisingly, the record DID already exist in Route 53. To get things moving again I manually deleted the record and then clicked “Create records in Route 53”.

This re-created the record I’d just deleted, and after a couple of minutes the domain validation passed and the certificate was created.

This was a bit of a weird one that I have been unable to reproduce. I’m not certain why the DNS validation ended up hanging but retriggering the process seems to have resolved it.

Note that there are other legitimate reasons why your deployment might be hanging at this step:

When you use the AWS::CertificateManager::Certificate resource in a CloudFormation stack, domain validation is handled automatically if all three of the following are true:

  • The certificate domain is hosted in Amazon Route 53.
  • The domain resides in your AWS account.
  • You are using DNS validation.

However, if the certificate uses email validation, or if the domain is not hosted in Route 53, then the stack will remain in the CREATE_IN_PROGRESS state. Further stack operations are delayed until you validate the certificate request, either by acting upon the instructions in the validation email, or by adding a CNAME record to your DNS configuration. For more information, see Option 1: DNS Validation and Option 2: Email Validation.

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-certificatemanager-certificate.html
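
If you hit something similar and want to see exactly which domain is stuck without clicking through the console, describe-certificate shows the per-domain validation status. A quick sketch (the certificate ARN is a placeholder):

# List each domain on the certificate along with its validation status
aws acm describe-certificate \
    --certificate-arn arn:aws:acm:<REGION>:<ACCOUNT_ID>:certificate/<CERT_ID> \
    --query 'Certificate.DomainValidationOptions[*].[DomainName,ValidationStatus]' \
    --output table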

Adding a Custom Domain Name – AWS SAM

Hi everyone,

It’s been a long time but I’m messing around with AWS SAM again. I’m currently converting www.testerwidgets.com into an AWS SAM application. As part of this I needed to add a custom domain. Unfortunately, the doco on how to do this isn’t great so I’m going to share what ended up working for me just in case it helps someone else.

To start, these are the resources that you’ll need in your template.yaml:

Resources:

  ApiCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
      ValidationMethod: DNS

  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: !Ref Stage
      # Allows https://www.YOUR_DOMAIN.com to call these APIs
      # SAM will automatically add AllowMethods with a list of methods for this API
      Cors: "'https://www.YOUR_DOMAIN.com'"
      EndpointConfiguration: REGIONAL
      Domain:
        DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "YOUR_DOMAIN.com." # NOTE: The period at the end is required

You’ll also need to make sure you reference the gateway from your function:

  # This lambda function is used to test endpoint availability.
  getPing:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-ping.getPingHandler
      Runtime: nodejs14.x
      Architectures:
        - x86_64
      MemorySize: 128
      Timeout: 30
      Events:
        Api:
          Type: Api
          Properties:
            Path: /ping2 # NOTE: AWS overrides the ping command.
            Method: GET
            RestApiId:
              Ref: ApiGatewayApi # NOTE: Make sure you have this referencing correctly.
      Description: Responds with 'Pong' if successful.

Now when you run sam deploy it will continue as usual until it gets to the stage of creating your certificate.

At this point it is waiting for a specific DNS entry in order to confirm that you own the domain. You’ll need to go into Route53 (or whichever other DNS provider you’re using) and add a CNAME entry with the specified details.

Note that the name and value contents should come from the pending ApiCertificate resource (you can view them on the certificate in the ACM console).

Once that’s done you’ll need to wait about sixty seconds for the DNS records to propagate within AWS. You should then be able to access your api using the new domain.
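
For example, assuming the getPing function above and a stage named dev, a quick smoke test would be:

# Should respond with 'Pong' once DNS has propagated
curl https://api-dev.YOUR_DOMAIN.com/ping2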

Thanks to the following GitHub post for the pointers in the right direction: https://github.com/aws/aws-sam-cli/issues/2290

No hosted zones named found – AWS SAM

Note that if you get the error above when trying to deploy, please ensure that you’ve added the trailing “.” to the Route53 HostedZoneName of the API Gateway resource in your template.yaml:

      Domain:
        DomainName: !Sub api-${Stage}.yourdomain.com
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "yourdomain.com." # NOTE: The period at the end is required
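
You can double-check the exact name of your hosted zone (including the trailing period) with the CLI:

# List the names of all hosted zones in the account
aws route53 list-hosted-zones --query 'HostedZones[*].Name' --output text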

Configure AWS Route53 domain to point to DigitalOcean name servers

Hey everyone,

This is a quick post on how to point your AWS Route53 domain to DigitalOcean. I’m currently messing around with Kubernetes on DigitalOcean (DOKS) and want to use their name servers with the nginx ingress controller.

The guide I was following (https://github.com/digitalocean/Kubernetes-Starter-Kit-Developers/blob/main/03-setup-ingress-controller/nginx.md) was missing a specific walkthrough for Route53 so I’m just posting what I did in case anyone else finds it useful.

To start, open up the “Registered Domains” tab on Route 53: https://console.aws.amazon.com/route53/home#DomainListing

Then click on your domain and, under name servers, click “Add or edit name servers”.

Replace the existing AWS name servers with the DigitalOcean ones and then click Update.

The values that you’ll need to use are:

  • ns1.digitalocean.com
  • ns2.digitalocean.com
  • ns3.digitalocean.com
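
If you’d rather script this than click through the console, the Route 53 Domains API can make the same change. A sketch (example.com is a placeholder; note that the route53domains commands are only available in us-east-1):

# Point a registered domain at the DigitalOcean name servers
aws route53domains update-domain-nameservers \
    --region us-east-1 \
    --domain-name example.com \
    --nameservers Name=ns1.digitalocean.com Name=ns2.digitalocean.com Name=ns3.digitalocean.com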

Note that these changes aren’t immediate. However, you should see a success message and receive an email notification stating the changes have been requested.

I found the AWS doco useful when trying to sort this one out: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-name-servers-glue-records.html#updating-name-servers-other-dns-service

Create a pre-signed upload url for AWS S3 using Golang

Hi everyone,

This is just a quick post on how to create a pre-signed upload url for AWS S3 using Golang.

To generate the pre-signed url, you’ll need something like the following:

package main

import (
	"fmt"
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/joho/godotenv"
	log "github.com/sirupsen/logrus"
)

func main() {

	// Load env vars
	err := godotenv.Load(".env")
	if err != nil {
		log.Fatalf("Error loading .env file: %v", err)
	}

	// Load the bucket name
	s3Bucket := os.Getenv("S3_BUCKET")
	if s3Bucket == "" {
		log.Fatal("an s3 bucket was unable to be loaded from env vars")
	}

	// Prepare the S3 request so a signature can be generated
	svc := s3.New(session.Must(session.NewSession()))
	r, _ := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String(s3Bucket),
		Key:    aws.String("test-file.jpg"),
	})

	// Create the pre-signed url with an expiry
	url, err := r.Presign(15 * time.Minute)
	if err != nil {
		fmt.Println("Failed to generate a pre-signed url: ", err)
		return
	}

	// Display the pre-signed url
	fmt.Println("Pre-signed URL", url)
}

Note that we’re using godotenv to load AWS environment variables containing a few AWS keys. You can get godotenv by running the following:

go get github.com/joho/godotenv

I then have a file called “.env” sitting in the root of my working directory:

S3_BUCKET=<YOUR_BUCKET_NAME>
AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
AWS_REGION=<YOUR_AWS_REGION>

Once you’ve got all of that set up you can run the script; it should output a link in your console window similar to the following:

Pre-signed URL https://YOUR_BUCKET.s3.YOUR_REGION.amazonaws.com/test-file.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=CREDS&X-Amz-Date=20210717T073809Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=GENERATED_SIG

To test the url you can use something like Postman with the following configuration: create a PUT request and paste the url into the address bar (this should populate the query parameters for you). As for the body, select “binary” and browse for an image. When you’re ready, click “Send”.

You should get a 200 OK response and now be able to see your uploaded image in your destination bucket. Unless you’ve changed the key it should be under the name “test-file.jpg”.
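
If you’d rather test from the command line, the same upload can be done with curl (test.jpg here is any local image; --upload-file performs a PUT):

curl --upload-file ./test.jpg "<YOUR_PRE_SIGNED_URL>"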

One of the main advantages of using a pre-signed url is that it allows you to upload images directly to AWS and bypass your backend server completely. You can also use it to sign image retrievals. This allows you to give the links a limited life-span – great for preventing hot-linking.

Thanks to the following GitHub post for pointing me in the right direction: https://github.com/aws/aws-sdk-go/issues/467#issuecomment-171468806

ElasticSearch on AWS – Anonymous is not authorized to perform es:ESHttpGet

Hi everyone,

I am trying out ElasticSearch on AWS and ran into the following error while trying to access the provided Kibana endpoint:

{"Message":"User: anonymous is not authorized to perform: es:ESHttpGet"}

This turned out to be pretty simple: I just needed to whitelist my IP. Go to your search domain in the AWS console, click “Access” and finally select “Modify access policy”.

Then all you need to do is add another statement that gives your current IP access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "es:ESHttp*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "192.0.2.0/24"
          ]
        }
      },
      "Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
    }
  ]
}
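
If you’d rather apply the policy from the CLI, update-elasticsearch-domain-config accepts the same JSON. A sketch, assuming the policy above is saved as policy.json:

# Replace the domain's access policy with the contents of policy.json
aws es update-elasticsearch-domain-config \
    --domain-name test-domain \
    --access-policies file://policy.json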

Feel free to comment below if you hit any issues, but I also found the following links pretty helpful:

https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-ac.html#es-ac-types-ip

https://aws.amazon.com/premiumsupport/knowledge-center/anonymous-not-authorized-elasticsearch/

Cognito Hosted UI User Pool – Google not showing

Hi everyone,

I’m implementing Cognito User Pools for an app and currently adding social providers (Google, Facebook, etc).

The setup process seems pretty straightforward, however the social options did not appear on my hosted UI.

It turned out that I’d missed the last step in the documentation:
– Go to “App Client Settings” (left hand menu under App integration)
– Look for “Enabled Identity Providers” and check any that you want to show

I found this a little unintuitive as I’d expected it to show once it was enabled in the “Identity Providers” section. I probably just need to learn to read ALL of the docs!
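
For reference, the same change can be made from the CLI via update-user-pool-client. A sketch (the pool and client ids are placeholders; be aware this call replaces the client’s settings, so any parameter you omit may be reset to its default):

# Enable the Cognito and Google identity providers for an app client.
# WARNING: omitted settings on the client may be reset to defaults.
aws cognito-idp update-user-pool-client \
    --user-pool-id <USER_POOL_ID> \
    --client-id <APP_CLIENT_ID> \
    --supported-identity-providers COGNITO Google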

The full documentation is available here: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-social-idp.html

Access to fetch from origin has been blocked by CORS policy – AWS SAM Local

Hi everyone,

I’ve been using AWS SAM local lately and ran into a bit of an issue with CORS. It took a looong time to find a solution that worked for all of my local scenarios so hopefully this will be able to help someone else out.

Access to fetch at 'http://127.0.0.1:3000' from origin 'http://localhost:3001' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

The error above is fairly typical when working with APIs. The request will work from tools such as Postman and Fiddler, but browsers will block it. These are a few good links that explain why CORS is necessary:
https://medium.com/@electra_chong/what-is-cors-what-is-it-used-for-308cafa4df1a
https://stackoverflow.com/a/29167709/522859

As for the solution, add the following to your template.yml:

  PreFlightFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: functions/generic/cors.getPreflightHandler
      Runtime: nodejs8.10
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /pages
            Method: options
            RestApiId: !Ref XXXApi
            Auth:
              Authorizer: NONE

If you haven’t explicitly defined your API in your template.yml file, a default one is created for you implicitly. There are a few examples on the AWS GitHub: https://github.com/awslabs/serverless-application-model/blob/release/v1.8.0/examples/2016-10-31/api_cognito_auth/template.yaml

The next thing to do is to create a handler for the options request:


/* Handles responding to CORS preflight (OPTIONS) requests */
exports.getPreflightHandler = async (event) => {
    return {
        statusCode: 200,
        headers: {
            'content-type': 'application/json',
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Headers': 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token',
            'Access-Control-Allow-Methods': 'OPTIONS,GET,POST,PUT,PATCH,DELETE',
            'Access-Control-Allow-Credentials': true
        },
        body: JSON.stringify({})
    };
};

I’ve moved all mine out to some helper methods but the above should be enough to get it working for you. Hopefully the AWS team will have a simpler solution soon but if you run into any issues in the meantime please let me know!

AWS SAM Request Extremely Slow – Fix

Hi everyone,

I’m currently using AWS SAM CLI with NodeJS and was surprised to find that the requests were significantly slower when run locally. Luckily, I came across a post that suggested adding --skip-pull-image to your start-api command:

sam local start-api --skip-pull-image

This brought my requests down to under a second. Thanks to the following link for the info: https://github.com/awslabs/aws-sam-cli/issues/134#issuecomment-406786704