I ran into a bit of an issue today while creating a certificate with CloudFormation. After kicking the stack off it ended up hanging on a step to create a domain verification entry in Route 53.
I had used this script multiple times for creating a certificate for a subdomain, but this time I included an apex domain as well. In order to narrow things down a little further I checked out the certificate via the console:
While the subdomain had passed, the apex domain was still sitting in pending. Surprisingly, the record DID already exist in Route 53. To get things moving again I manually deleted the record and then clicked “Create records in Route 53”.
This re-created the record I’d just deleted, and after a couple of minutes the domain validation passed and then the certificate was created:
This was a bit of a weird one that I have been unable to reproduce. I’m not certain why the DNS validation ended up hanging but retriggering the process seems to have resolved it.
Note that there are other legitimate reasons why your deployment might be hanging at this step:
When you use the AWS::CertificateManager::Certificate resource in a CloudFormation stack, domain validation is handled automatically if all three of the following are true: The certificate domain is hosted in Amazon Route 53, the domain resides in your AWS account, and you are using DNS validation.
However, if the certificate uses email validation, or if the domain is not hosted in Route 53, then the stack will remain in the CREATE_IN_PROGRESS state. Further stack operations are delayed until you validate the certificate request, either by acting upon the instructions in the validation email, or by adding a CNAME record to your DNS configuration. For more information, see Option 1: DNS Validation and Option 2: Email Validation.
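For reference, a minimal certificate resource that meets all three conditions looks something like the sketch below (the domain name and hosted zone ID are placeholders; supplying DomainValidationOptions with a HostedZoneId is what lets CloudFormation create the validation records for you):

```yaml
ExampleCertificate:
  Type: AWS::CertificateManager::Certificate
  Properties:
    DomainName: example.com          # placeholder apex domain
    SubjectAlternativeNames:
      - "*.example.com"              # optional: cover subdomains too
    ValidationMethod: DNS
    DomainValidationOptions:
      - DomainName: example.com
        HostedZoneId: Z0000000EXAMPLE # hosted zone in the same AWS account
```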
It’s been a long time, but I’m messing around with AWS SAM again. I’m currently converting www.testerwidgets.com into an AWS SAM application. As part of this I needed to add a custom domain. Unfortunately, the documentation on how to do this isn’t great, so I’m going to share what ended up working for me in case it helps someone else.
To start, these are the resources that you’ll need in your template.yaml:
Resources:
  ApiCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
      ValidationMethod: DNS

  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: !Ref Stage
      # Allows www.YOUR_DOMAIN.com to call these APIs
      # SAM will automatically add AllowMethods with a list of methods for this API
      Cors: "'www.YOUR_DOMAIN.com'"
      EndpointConfiguration: REGIONAL
      Domain:
        DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "YOUR_DOMAIN.com." # NOTE: The period at the end is required
You’ll also need to make sure you reference the gateway from your function:
# This lambda function is used to test endpoint availability.
getPing:
  Type: AWS::Serverless::Function
  Properties:
    Handler: src/handlers/get-ping.getPingHandler
    Runtime: nodejs14.x
    Architectures:
      - x86_64
    MemorySize: 128
    Timeout: 30
    Events:
      Api:
        Type: Api
        Properties:
          Path: /ping2 # NOTE: AWS overrides the ping command.
          Method: GET
          RestApiId:
            Ref: ApiGatewayApi # NOTE: Make sure you have this referencing correctly.
    Description: Responds with 'Pong' if successful.
Now when you run sam deploy it will continue as usual until it gets to the stage of creating your certificate:
Here it is looking for a specific DNS entry in order to confirm that you own the domain. You’ll need to go into Route 53 (or whichever other DNS provider you’re using) and add a CNAME entry with the specified details:
Note that your name and value contents should come from the output of the ApiCertificate job (highlighted in the screenshot above).
Once that’s done you’ll need to wait about sixty seconds for the DNS records to propagate within AWS. You should then be able to access your api using the new domain:
Note that if you get the error above when trying to deploy, please ensure that you’ve added the trailing “.” to the Route53 HostedZoneName of the API Gateway resource in your template.yaml:
Domain:
  DomainName: !Sub api-${Stage}.your-domain.com
  CertificateArn: !Ref ApiCertificate
  Route53:
    HostedZoneName: "your-domain.com." # NOTE: The period at the end is required
I’m currently setting up a MySQL database on DigitalOcean and hit the following error when connecting:
failed to connect to the database: default addr for "DATABASE_CONN_STR" network unknown
Luckily this turned out to be a pretty easy fix. In the Go MySQL driver repo you can see that the only scenario where this error is shown is when the network doesn’t match “tcp” or “unix”:
// Set default network if empty
if cfg.Net == "" {
	cfg.Net = "tcp"
}

// Set default address if empty
if cfg.Addr == "" {
	switch cfg.Net {
	case "tcp":
		cfg.Addr = "127.0.0.1:3306"
	case "unix":
		cfg.Addr = "/tmp/mysql.sock"
	default:
		return errors.New("default addr for network '" + cfg.Net + "' unknown")
	}
} else if cfg.Net == "tcp" {
	cfg.Addr = ensureHavePort(cfg.Addr)
}
To fix it, all that was required was to wrap the address portion of the connection string in tcp(…) (or unix(…) if you’re connecting via a socket).
Note that the host name and port on the second line are now wrapped in “tcp(…)”. At first I found it strange that the “set default address if empty” check was triggered when I had a host and port set, but it makes sense once you look at the DSN format: it’s user:password@network(address)/dbname, so without the parentheses the bare host:port is parsed as the network name and the address is left empty, which lands you in the default case above.
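To illustrate the fix, here’s a small sketch using a hypothetical wrapTCP helper (not part of the driver) that converts a bare host:port DSN into the tcp(…) form the driver expects:

```go
package main

import (
	"fmt"
	"strings"
)

// wrapTCP wraps the bare "host:port" segment of a MySQL DSN in "tcp(...)".
// This is an illustrative helper only; it assumes a simple
// user:password@host:port/dbname style DSN.
func wrapTCP(dsn string) string {
	at := strings.LastIndex(dsn, "@")
	slash := strings.LastIndex(dsn, "/")
	if at == -1 || slash == -1 || slash < at {
		return dsn // not in the shape we expect; leave it alone
	}
	addr := dsn[at+1 : slash]
	if addr == "" || strings.HasPrefix(addr, "tcp(") || strings.HasPrefix(addr, "unix(") {
		return dsn // no address, or a network is already specified
	}
	return dsn[:at+1] + "tcp(" + addr + ")" + dsn[slash:]
}

func main() {
	// Hypothetical DigitalOcean-style connection string.
	broken := "doadmin:secret@db.example.com:25060/defaultdb?ssl-mode=REQUIRED"
	fmt.Println(wrapTCP(broken))
	// → doadmin:secret@tcp(db.example.com:25060)/defaultdb?ssl-mode=REQUIRED
}
```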
Thanks to this stackoverflow post and github link for the info:
This is a quick post on how to point your AWS Route53 domain to DigitalOcean. I’m currently messing around with Kubernetes on DigitalOcean (DOKS) and want to use their name servers to route traffic to nginx.
In the Route 53 console, click on your domain and under name servers click “Add or edit name servers”:
Replace the existing AWS name servers with the DigitalOcean ones and then click update:
The values that you’ll need to use are:
ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com
Note that these changes don’t take effect immediately. However, you should see a success message and receive an email notification stating that the changes have been requested.
If you’re like me and a bit slack with your personal projects you might’ve started receiving the following error today:
admin@Admins-iMac ui % git push
remote: Support for password authentication was removed on August 13, 2021. Please use a personal access token instead.
remote: Please see https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/ for more information.
fatal: unable to access 'https://github.com/Buzzology/referrer.git/': The requested URL returned error: 403
As the message says, GitHub wants you to start using a Personal Access Token (PAT) instead of password authentication. Luckily, the fix is pretty straightforward: you’ll need to create a Personal Access Token and then update your keychain.
Once you’re on the Personal Access Tokens page you should see something like the following:
Click the Generate new token button, set an expiry and then copy the generated value (you’ll need it in the next step).
Step #2: Updating your keychain
Now that you’ve got your Personal Access Token you need to replace the password that you’ve currently got stored in your keychain. To start, open Spotlight search and bring up Keychain Access:
If you’ve got quite a few keys there you can filter them by searching for github. You’ll then need to double click on each of the entries and replace the stored password with your personal access token:
Note that you’ll first need to click Show Password.
Now that your keychain is updated, close and then re-open any of your terminals and you should be good to go.
admin@Admins-iMac ui % git push
Enumerating objects: 110, done.
Counting objects: 100% (110/110), done.
Delta compression using up to 4 threads
Compressing objects: 100% (91/91), done.
Writing objects: 100% (93/93), 15.30 KiB | 2.19 MiB/s, done.
Total 93 (delta 64), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (64/64), completed with 14 local objects.
To https://github.com/Buzzology/referrer.git
0d2ecf0..97f2716 master -> master
admin@Admins-iMac ui %
This is a quick post to show how you can add a custom error message when using Vuelidate in Vue3. In my case I have a price field that should not be greater than $1000. The issue is that I store the amount in cents which results in the following error:
This is obviously a little ambiguous to the user, who isn’t aware that everything is running in cents behind the scenes. To avoid the issue I used a helper from @vuelidate/validators:
// Import the following in your component
import {
  required,
  maxValue,
  helpers, // Helpers provides access to the message customisation
} from "@vuelidate/validators";

...

// Add the following validation rule
priceInCents: {
  required,
  maxValue: helpers.withMessage("Price cannot be more than $1000.00", maxValue(100000)),
},
With the new rule in place the error is much more useful:
To test the URL you can use something like Postman with the following configuration:
Simply pasting the url into the path should populate the headers for you. As for the body, select “binary” and browse for an image. When you’re ready, click “Send”.
You should get a 200 OK response and now be able to see your uploaded image in your destination bucket. Unless you’ve changed the key it should be under the name “test-file.jpg”.
One of the main advantages of using a pre-signed URL is that it allows you to upload images directly to AWS, bypassing your backend server completely. You can also use pre-signed URLs to sign image retrievals, which lets you give the links a limited life-span – great for preventing hot-linking.