I normally use DigitalOcean or Azure for Docker and Kubernetes, but decided to give AWS a go this time around. I was following a guide on deploying an image to a new ECR repo and hit a couple of issues.
The first was that running the login command printed the CLI's usage text instead of the password I was expecting:
aws ecr get-login --no-include-email --region us-east-2
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:
batch-check-layer-availability | batch-delete-image
batch-get-image | batch-get-repository-scanning-configuration
complete-layer-upload | create-pull-through-cache-rule
create-repository | delete-lifecycle-policy
delete-pull-through-cache-rule | delete-registry-policy
delete-repository | delete-repository-policy
describe-image-replication-status | describe-image-scan-findings
describe-images | describe-pull-through-cache-rules
...
This turned out to be because the get-login command was removed in version 2 of the AWS CLI. Its replacement is get-login-password, which you pipe straight into docker login:
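aws ecr get-login-password | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-east-2.amazonaws.com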
The second issue I ran into was an error while trying to run the new command:
An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation: User: arn:aws:iam::<ACCOUNT_ID>:user/<USER> is not authorized to perform: ecr:GetAuthorizationToken on resource: * because no identity-based policy allows the ecr:GetAuthorizationToken action
Attaching the following AWS managed policy to my user resolved the issue: AmazonEC2ContainerRegistryPowerUser.
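If you prefer the CLI to the console, attaching it looks something like this (the user name is a placeholder):
aws iam attach-user-policy --user-name <USER> --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser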
Once I was past this, I hit another issue using the command above:
Error response from daemon: login attempt to https://<ACCOUNT_ID>.dkr.ecr.us-east-2.amazonaws.com/v2/ failed with status: 400 Bad Request
This took a bit of digging, but eventually I came across a thread where someone was using the same command and had hit the same issue. Adding the region to the get-login-password call seemed to fix it:
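aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-east-2.amazonaws.com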
It’s been a long time, but I’m messing around with AWS SAM again. I’m currently converting www.testerwidgets.com into an AWS SAM application, and as part of this I needed to add a custom domain. Unfortunately, the doco on how to do this isn’t great, so I’m sharing what ended up working for me in case it helps someone else.
To start, these are the resources that you’ll need in your template.yaml:
Resources:
  ApiCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
      ValidationMethod: DNS

  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: !Ref Stage
      # Allows www.YOUR_DOMAIN.com to call these APIs
      # SAM will automatically add AllowMethods with a list of methods for this API
      Cors: "'www.YOUR_DOMAIN.com'"
      EndpointConfiguration: REGIONAL
      Domain:
        DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "YOUR_DOMAIN.com." # NOTE: The period at the end is required
You’ll also need to make sure you reference the gateway from your function:
  # This lambda function is used to test endpoint availability.
  getPing:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-ping.getPingHandler
      Runtime: nodejs14.x
      Architectures:
        - x86_64
      MemorySize: 128
      Timeout: 30
      Events:
        Api:
          Type: Api
          Properties:
            Path: /ping2 # NOTE: AWS overrides the ping command.
            Method: GET
            RestApiId:
              Ref: ApiGatewayApi # NOTE: Make sure you have this referencing correctly.
      Description: Responds with 'Pong' if successful.
Now when you run sam deploy it will continue as usual until it gets to the stage of creating your certificate.
Here it is looking for a specific DNS entry in order to confirm that you own the domain. You’ll need to go into Route53 (or whichever other DNS provider you’re using) and add a CNAME entry with the specified details:
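For illustration only (the real name and value are unique to your certificate), the record will look something like:
Name: _3f82a9c1example.api-dev.YOUR_DOMAIN.com.
Type: CNAME
Value: _67fd1ab2example.acm-validations.aws.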
Note that your name and value contents should come from the output of the ApiCertificate resource in the stack events.
Once that’s done you’ll need to wait about sixty seconds for the DNS records to propagate within AWS. You should then be able to access your api using the new domain:
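For example, assuming a Stage of dev, the ping endpoint defined above should now respond:
curl https://api-dev.YOUR_DOMAIN.com/ping2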
Note that if your deploy fails at this point, please ensure that you’ve added the trailing “.” to your Route53 HostedZoneName in the api gateway section of your template.yaml:
      Domain:
        DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "YOUR_DOMAIN.com." # NOTE: The period at the end is required
If you’re like me and a bit slack with your personal projects, you might’ve started receiving the following error today:
admin@Admins-iMac ui % git push
remote: Support for password authentication was removed on August 13, 2021. Please use a personal access token instead.
remote: Please see https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/ for more information.
fatal: unable to access 'https://github.com/Buzzology/referrer.git/': The requested URL returned error: 403
As the message says, GitHub wants you to start using a Personal Access Token (PAT) instead of password authentication. Luckily, the fix is pretty straightforward: you’ll need to create a Personal Access Token and then update your keychain.
Step #1: Creating a Personal Access Token
To create a token, head to GitHub > Settings > Developer settings > Personal access tokens. Once you’re on the Personal Access Tokens page you should see something like the following:
Click the Generate new token button, give it at least the repo scope, set an expiry, and then copy the generated value (you’ll need it in the next step).
Step #2: Updating your keychain
Now that you’ve got your Personal Access Token you need to replace the password that’s currently stored in your keychain. To start, open Spotlight and bring up Keychain Access:
If you’ve got quite a few keys there you can filter them by searching for github. You’ll then need to double click on each of the entries and replace the stored password with your personal access token:
Note that you’ll first need to click Show Password.
Now that your keychain is updated, close and then re-open any of your terminals and you should be good to go.
admin@Admins-iMac ui % git push
Enumerating objects: 110, done.
Counting objects: 100% (110/110), done.
Delta compression using up to 4 threads
Compressing objects: 100% (91/91), done.
Writing objects: 100% (93/93), 15.30 KiB | 2.19 MiB/s, done.
Total 93 (delta 64), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (64/64), completed with 14 local objects.
To https://github.com/Buzzology/referrer.git
0d2ecf0..97f2716 master -> master
admin@Admins-iMac ui %
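As an aside, if you’d rather not click around in Keychain Access, you can erase the stored credential from the terminal instead and let git prompt for the new token on your next push (this assumes the default osxkeychain credential helper):
printf "protocol=https\nhost=github.com\n" | git credential-osxkeychain erase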
This is a quick post to show how you can add a custom error message when using Vuelidate in Vue 3. In my case I have a price field that should not be greater than $1000. The issue is that I store the amount in cents, which results in the following error:
This is obviously a little ambiguous to the user, who isn’t aware that everything is running in cents behind the scenes. To avoid the issue I used a helper from @vuelidate/validators:
// Import the following in your component
import {
  required,
  maxValue,
  helpers, // helpers provides access to message customisation
} from "@vuelidate/validators";
...
// Add the following validation rule
priceInCents: {
  required,
  maxValue: helpers.withMessage("Price cannot be more than $1000.00", maxValue(100000)),
},
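If it helps, here’s roughly how that rule wires into a component with useVuelidate. The state shape and component wrapper below are assumptions, not from my original component:
import { reactive } from "vue";
import { useVuelidate } from "@vuelidate/core";
import { required, maxValue, helpers } from "@vuelidate/validators";

export default {
  setup() {
    // Assumed state shape: the price is tracked in cents
    const state = reactive({ priceInCents: 0 });

    const rules = {
      priceInCents: {
        required,
        // 100000 cents == $1000.00
        maxValue: helpers.withMessage(
          "Price cannot be more than $1000.00",
          maxValue(100000)
        ),
      },
    };

    const v$ = useVuelidate(rules, state);
    return { state, v$ };
  },
};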
With the new rule in place the error is much more useful:
To test the URL you can use something like Postman with the following configuration:
Simply pasting the URL into the path should populate the headers for you. As for the body, select “binary” and browse for an image. When you’re ready, click “Send”.
You should get a 200 OK response and should now be able to see your uploaded image in your destination bucket. Unless you’ve changed the key it should be under the name “test-file.jpg”.
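If you’d rather test from the command line, the equivalent curl call looks something like this (assuming the URL was signed for a PUT request; quote it so the shell doesn’t mangle the query string):
curl -T ./test-file.jpg "<YOUR_PRESIGNED_URL>"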
One of the main advantages of using a pre-signed URL is that it allows you to upload images directly to AWS and bypass your backend server completely. You can also use it to sign image retrievals, which allows you to give the links a limited life-span: great for preventing hot-linking.
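For reference, here’s a minimal sketch of generating a pre-signed upload URL with the AWS SDK for JavaScript v3. The bucket name and region are placeholders, and where you run it (e.g. a Lambda behind API Gateway) will depend on your setup:
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({ region: "us-east-2" }); // placeholder region

// Sign a PUT for the same key used above
const command = new PutObjectCommand({
  Bucket: "YOUR_BUCKET",
  Key: "test-file.jpg",
});

// The URL expires after 5 minutes, which is what gives the links their limited life-span
const url = await getSignedUrl(client, command, { expiresIn: 300 });
console.log(url);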
I’ve recently switched from PostgreSQL and MSSQL to MySQL. I ran into a bit of an issue today where I needed to see the queries I was generating for an insert statement. For MSSQL I’d normally use SQL Server Profiler.
After a bit of Googling I came across the following solution for MySQL:
-- Enable the logging
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
-- View the results
SELECT *
FROM mysql.general_log
ORDER BY event_time DESC;
Running this in Sequel Pro displays an output similar to the following:
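Once you’ve got what you need, it’s worth switching the log off again, as the general log grows quickly and adds overhead:
-- Disable the logging and clear out the captured queries
SET GLOBAL general_log = 'OFF';
TRUNCATE TABLE mysql.general_log;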