Using Postman with .NET gRPC endpoints

Hey everyone,

Just a quick post on how to use Postman with a gRPC endpoint in .NET Core.

Add the gRPC reflection package to your project:

dotnet add package Grpc.AspNetCore.Server.Reflection

Register the services in the container and map the reflection endpoint in your Program.cs:

builder.Services.AddGrpc();

// Add this line.
builder.Services.AddGrpcReflection();

var app = builder.Build();

app.MapGrpcService<GreeterService>();

// Add this line.
app.MapGrpcReflectionService();

Start up your project and then open Postman. Create a new gRPC request by:

  • Clicking File > New
  • Selecting gRPC Request (currently has a beta tag)
  • Entering your URL, e.g. localhost:20257
  • Clicking Refresh under Using server reflection

You should now be able to see your gRPC service listed to the right. Click the Invoke button.
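If you prefer the command line, grpcurl can exercise the same reflection endpoint. This is just a sketch: the -plaintext flag assumes you're running without TLS locally, and greet.Greeter/SayHello is the service from the default .NET gRPC template, so substitute your own if you've changed it.

# List the services exposed via reflection
grpcurl -plaintext localhost:20257 list

# Invoke the default template's SayHello method
grpcurl -plaintext -d '{"name": "World"}' localhost:20257 greet.Greeter/SayHello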

System.IO.IOException with a .NET gRPC Project on Mac

Hey everyone,


I created a new gRPC project with Visual Studio on a Mac and ran into the following error when trying to run it:

System.IO.IOException
...
HTTP/2 over TLS is not supported on macOS due to missing ALPN support

It turns out that Kestrel does not support HTTP/2 with TLS on macOS. To get around it, we have to configure Kestrel to serve HTTP/2 without TLS.

In Program.cs, add the lines below with your HTTP port (e.g. 20257):

using Microsoft.AspNetCore.Server.Kestrel.Core; // Needed for HttpProtocols

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    // Setup a HTTP/2 endpoint without TLS.
    options.ListenLocalhost(20257, o => o.Protocols =
        HttpProtocols.Http2);
});
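Alternatively, if you'd rather keep this out of code, the same endpoint can be configured in appsettings.json. This is a sketch assuming the default template; the endpoint name "Grpc" is arbitrary:

{
  "Kestrel": {
    "Endpoints": {
      "Grpc": {
        "Url": "http://localhost:20257",
        "Protocols": "Http2"
      }
    }
  }
}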

Errors pushing an image to a new ECR repo on AWS

Hey everyone,

I normally use DigitalOcean or Azure for Docker and Kubernetes but decided to give AWS a go this time around. I was following a guide on deploying an image to a new ECR repo and hit a couple of issues.

The first was that running the login command output help options instead of the password I was expecting:

aws ecr get-login --no-include-email --region us-east-2

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument operation: Invalid choice, valid choices are:

batch-check-layer-availability           | batch-delete-image                      
batch-get-image                          | batch-get-repository-scanning-configuration
complete-layer-upload                    | create-pull-through-cache-rule          
create-repository                        | delete-lifecycle-policy                 
delete-pull-through-cache-rule           | delete-registry-policy                  
delete-repository                        | delete-repository-policy                
describe-image-replication-status        | describe-image-scan-findings            
describe-images                          | describe-pull-through-cache-rules       
...

This turned out to be because the get-login command was deprecated and has since been removed from AWS CLI v2. Instead, use the following:

aws ecr get-login-password | docker login --username AWS --password-stdin "$(aws sts get-caller-identity --query Account --output text).dkr.ecr.<REGION_ID>.amazonaws.com"

There’s a pretty detailed thread on GitHub here: https://github.com/aws/aws-cli/issues/5014

The second issue I ran into was an error while trying to run the new command:

An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation: User: arn:aws:iam::<ACCOUNT_ID>:user/<USER> is not authorized to perform: ecr:GetAuthorizationToken on resource: * because no identity-based policy allows the ecr:GetAuthorizationToken action

Attaching the following managed policy to my user resolved the issue: AmazonEC2ContainerRegistryPowerUser
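If you prefer to do this from the CLI rather than the console, attaching the managed policy looks something like the following (the user name is a placeholder):

aws iam attach-user-policy \
  --user-name <USER> \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser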

Once I was past this, I hit another issue using the command from the GitHub link above:

Error response from daemon: login attempt to https://<ACCOUNT_ID>.dkr.ecr.us-east-2.amazonaws.com/v2/ failed with status: 400 Bad Request

This took a bit of digging, but eventually I came across a thread where someone was using the same command and had hit the same issue. Adding the region to the get-login-password call seemed to fix it:

aws ecr get-login-password --region <REGION_ID> | docker login --username AWS --password-stdin "$(aws sts get-caller-identity --query Account --output text).dkr.ecr.<REGION_ID>.amazonaws.com"

I was finally getting a login succeeded message and my push was working. This was the thread mentioning the region ID just in case you need a bit more info: https://github.com/aws/aws-cli/issues/5317#issuecomment-835645395
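In case the guide you're following doesn't spell it out, the tag and push steps after logging in look something like this (the image and repository names are placeholders):

docker tag my-app:latest <ACCOUNT_ID>.dkr.ecr.<REGION_ID>.amazonaws.com/<REPO_NAME>:latest
docker push <ACCOUNT_ID>.dkr.ecr.<REGION_ID>.amazonaws.com/<REPO_NAME>:latest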

Adding a Custom Domain Name – AWS SAM

Hi everyone,

It’s been a long time but I’m messing around with AWS SAM again. I’m currently converting www.testerwidgets.com into an AWS SAM application. As part of this I needed to add a custom domain. Unfortunately, the doco on how to do this isn’t great so I’m going to share what ended up working for me just in case it helps someone else.

To start, these are the resources that you’ll need in your template.yaml:

Resources:

  ApiCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
      ValidationMethod: DNS

  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: !Ref Stage
      # Allows www.YOUR_DOMAIN.com to call these APIs
      # SAM will automatically add AllowMethods with a list of methods for this API
      Cors: "'www.YOUR_DOMAIN.com'"
      EndpointConfiguration: REGIONAL
      Domain:
        DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "YOUR_DOMAIN.com." # NOTE: The period at the end is required

You’ll also need to make sure you reference the gateway from your function:

  # This Lambda function is used to test endpoint availability.
  getPing:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-ping.getPingHandler
      Runtime: nodejs14.x
      Architectures:
        - x86_64
      MemorySize: 128
      Timeout: 30
      Events:
        Api:
          Type: Api
          Properties:
            Path: /ping2 # NOTE: AWS overrides the ping command.
            Method: GET
            RestApiId:
              Ref: ApiGatewayApi # NOTE: Make sure you have this referencing correctly.
      Description: Responds with 'Pong' if successful.

Now when you run sam deploy it will continue as usual until it gets to the stage of creating your certificate.

At this point it is waiting on a specific DNS entry in order to confirm that you own the domain. You’ll need to go into Route53 (or whichever DNS provider you’re using) and add a CNAME entry with the details it specifies.

Note that the name and value contents should come from the output of the ApiCertificate resource shown during the deploy.

Once that’s done you’ll need to wait about sixty seconds for the DNS records to propagate within AWS. You should then be able to access your API using the new domain.
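As a quick check, a curl against the /ping2 endpoint from the function above should now respond. This assumes a Stage value of prod and the default base path mapping, which points the domain root straight at the stage:

curl https://api-prod.YOUR_DOMAIN.com/ping2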

Thanks to the following GitHub post for the pointers in the right direction: https://github.com/aws/aws-sam-cli/issues/2290

AWS SAM: “No hosted zones named found”

Note that if you get the error above when trying to deploy, please ensure that you’ve added the trailing “.” to the Route53 HostedZoneName of the API gateway in your template.yaml:

      Domain:
        DomainName: !Sub api-${Stage}.YOUR_DOMAIN.com
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "YOUR_DOMAIN.com." # NOTE: The period at the end is required

Golang and MySQL – DigitalOcean managed cluster

Hey everyone,

Just sharing a helper function to get you started when trying to connect to a managed MySQL cluster on DigitalOcean with Golang.

Before we get into the code you’ll need to grab a couple of things from the database dashboard (on DigitalOcean).

  • Open the databases tab
  • Look for the “Connection Details” section
  • Download your ca cert file
  • Copy down your “public network” settings
    • If you’re moving this into a cluster you can use the “private network” settings instead

With those details in hand, the helper looks like this:

import (
	"crypto/tls"
	"crypto/x509"
	"database/sql"
	"fmt"
	"io/ioutil"

	"github.com/go-sql-driver/mysql"
	log "github.com/sirupsen/logrus"
)

// initDb initialises the connection to MySQL
func initDb(connectionString string, caCertPath string) (*sql.DB, error) {

	log.Infof("initialising db connection")

	// Prepare ssl if required: https://stackoverflow.com/a/54189333/522859
	if caCertPath != "" {

		log.Infof("Loading the ca-cert: %v", caCertPath)

		// Load the CA cert
		certBytes, err := ioutil.ReadFile(caCertPath)
		if err != nil {
			log.Fatal("unable to read in the cert file", err)
		}

		caCertPool := x509.NewCertPool()
		if ok := caCertPool.AppendCertsFromPEM(certBytes); !ok {
			log.Fatal("failed-to-parse-sql-ca", err)
		}

		tlsConfig := &tls.Config{
			InsecureSkipVerify: false,
			RootCAs:            caCertPool,
		}

		// Register the config under a name the connection string can reference via ?tls=bbs-tls
		if err := mysql.RegisterTLSConfig("bbs-tls", tlsConfig); err != nil {
			log.Fatal("failed to register the tls config", err)
		}
	}

	var sqlDb, err = sql.Open("mysql", connectionString)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to the database: %v", err)
	}

	// Ensure that the database can be reached
	err = sqlDb.Ping()
	if err != nil {
		return nil, fmt.Errorf("error on opening database connection: %s", err.Error())
	}

	return sqlDb, nil
}

A couple of things to note about the helper above:

  1. You’ll need to provide the path to your downloaded ca-cert as the second argument
  2. Your connection string will need to look something like the following: USERNAME:PASSWORD@tcp(HOST_NAME:PORT_NUMBER)/DB_NAME?tls=bbs-tls (the tls parameter must match the name registered in the helper)

Note that the “tcp(…)” bit is required; see the following post for more info: https://whatibroke.com/2021/11/27/failed-to-connect-to-the-database-default-addr-for-network-unknown-mysql-and-golang/
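Putting it together, a call to the helper might look something like this (the credentials and cert path are placeholders):

db, err := initDb(
	"USERNAME:PASSWORD@tcp(HOST_NAME:PORT_NUMBER)/DB_NAME?tls=bbs-tls",
	"/path/to/ca-certificate.crt",
)
if err != nil {
	log.Fatalf("failed to initialise the database: %v", err)
}
defer db.Close()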

Depending on which version of the mysql driver you’re using you may also need to revert to the legacy auth mechanism: https://docs.digitalocean.com/products/databases/mysql/resources/troubleshoot-connections/#authentication

No matches for kind “Deployment” in version “apps/v1beta2” – DigitalOcean

Hey everyone,

I’ve been working on a small golang app and decided to try out Kubernetes on DigitalOcean instead of the usual Azure or AWS. In order to get started I followed the tutorial at this link: https://www.digitalocean.com/community/tutorials/webinar-series-a-closer-look-at-kubernetes

I ran into a small issue while trying to use the provided deployment.yaml.

error: unable to recognize "./api/deploy/prod/deployment.yaml": no matches for kind "Deployment" in version "apps/v1beta2"

This turned out to be an issue with the apiVersion being used. Changing the first line to apps/v1 resolved it:

# Before
apiVersion: apps/v1beta2
kind: Deployment
metadata:

# After
apiVersion: apps/v1
kind: Deployment
metadata:
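After updating the manifest, re-applying it should go through (assuming the same path from the error message):

kubectl apply -f ./api/deploy/prod/deployment.yaml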

Thanks to this link on GitHub for pointing me in the right direction: https://github.com/coreos/etcd-operator/issues/2126

Support for password authentication was removed on August 13, 2021. Please use a personal access token instead – Fix for Mac

Hey everyone,

If you’re like me and a bit slack with your personal projects you might’ve started receiving the following error today:

admin@Admins-iMac ui % git push
remote: Support for password authentication was removed on August 13, 2021. Please use a personal access token instead.
remote: Please see https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/ for more information.
fatal: unable to access 'https://github.com/Buzzology/referrer.git/': The requested URL returned error: 403

As the message says, GitHub wants you to start using a Personal Access Token (PAT) instead of password authentication. Luckily, the fix is pretty straightforward – you’ll need to create a Personal Access Token and then update your keychain.

Step #1: Create a Personal Access Token (PAT)

To create the personal access token, log in to GitHub and go to the following page: https://github.com/settings/tokens

You can also get to this page via the following:

  1. Click on your profile dropdown
  2. Click settings
  3. Click Developer settings (on the left)
  4. Click Personal access tokens (on the left)

Once you’re on the Personal Access Tokens page, click the Generate new token button, set an expiry and then copy the generated value (you’ll need it in the next step).

Step #2: Updating your keychain

Now that you’ve got your Personal Access Token you need to replace the password that you’ve currently got stored in your keychain. To start, open Spotlight search and bring up Keychain Access.

If you’ve got quite a few keys there you can filter them by searching for github. You’ll then need to double-click on each of the entries and replace the stored password with your personal access token.

Note that you’ll first need to click Show Password.
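Alternatively, if you’d rather skip the Keychain Access UI, you can erase the stored credential from the terminal and let Git prompt for the token on your next push. This is a sketch assuming Git is using the osxkeychain credential helper:

printf "protocol=https\nhost=github.com\n" | git credential-osxkeychain erase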

Now that your keychain is updated, close and then re-open any of your terminals and you should be good to go.

admin@Admins-iMac ui % git push
Enumerating objects: 110, done.
Counting objects: 100% (110/110), done.
Delta compression using up to 4 threads
Compressing objects: 100% (91/91), done.
Writing objects: 100% (93/93), 15.30 KiB | 2.19 MiB/s, done.
Total 93 (delta 64), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (64/64), completed with 14 local objects.
To https://github.com/Buzzology/referrer.git
   0d2ecf0..97f2716  master -> master
admin@Admins-iMac ui % 

Thanks to the following Stack Overflow post for the additional info: https://stackoverflow.com/a/67650257/522859

Custom Error Message for PriceInCents – Vue3 and Vuelidate

Hey everyone,

This is a quick post to show how you can add a custom error message when using Vuelidate in Vue 3. In my case I have a price field that should not be greater than $1000. The issue is that I store the amount in cents, which results in an error along the lines of “The maximum value allowed is 100000”.

This is obviously a little ambiguous to the user, who isn’t aware that everything is running in cents behind the scenes. In order to avoid the issue I used a helper in @vuelidate/validators:

// Import the following in your component
import {
  required,  
  maxValue,
  helpers, // Helpers provides access to the message customisation
} from "@vuelidate/validators";

...

// Add the following validation rule
priceInCents: {
  required,
  maxValue: helpers.withMessage("Price cannot be more than $1000.00", maxValue(100000)),
},
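If it helps to see the rule in context, here’s roughly how it slots into a component. This is a sketch assuming the Options API with useVuelidate from @vuelidate/core; the field and message come from the rule above:

import { useVuelidate } from "@vuelidate/core";
import { required, maxValue, helpers } from "@vuelidate/validators";

export default {
  setup() {
    // Exposes the validation state as v$
    return { v$: useVuelidate() };
  },
  data() {
    return { priceInCents: 0 };
  },
  validations() {
    return {
      priceInCents: {
        required,
        maxValue: helpers.withMessage("Price cannot be more than $1000.00", maxValue(100000)),
      },
    };
  },
};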

With the new rule in place the error is much more useful: “Price cannot be more than $1000.00”.

While it doesn’t seem to show up too well on Google, this functionality is documented by Vuelidate on the following page: https://vuelidate-next.netlify.app/custom_validators.html#custom-error-messages

Create a pre-signed upload url for AWS S3 using Golang

Hi everyone,

This is just a quick post on how to create a pre-signed upload url for AWS S3 using Golang.

To generate the pre-signed url, you’ll need something like the following:

package main

import (
	"fmt"
	"github.com/joho/godotenv"
	log "github.com/sirupsen/logrus"
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {

	// Load env vars
	err := godotenv.Load(".env")
	if err != nil {
		log.Fatalf("Error loading .env file: %v", err)
	}

	// Load the bucket name
	s3Bucket := os.Getenv("S3_BUCKET")
	if s3Bucket == "" {
		log.Fatal("an s3 bucket was unable to be loaded from env vars")
	}

	// Prepare the S3 request so a signature can be generated
	svc := s3.New(session.Must(session.NewSession()))
	r, _ := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String(s3Bucket),
		Key:    aws.String("test-file.jpg"),
	})

	// Create the pre-signed url with an expiry
	url, err := r.Presign(15 * time.Minute)
	if err != nil {
		fmt.Println("Failed to generate a pre-signed url: ", err)
		return
	}

	// Display the pre-signed url
	fmt.Println("Pre-signed URL", url)
}

Note that we’re using godotenv to load the required AWS keys from environment variables. You can get godotenv by running the following:

go get github.com/joho/godotenv

I then have a file called “.env” sitting in the root of my working directory:

S3_BUCKET=<YOUR_BUCKET_NAME>
AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
AWS_REGION=<YOUR_AWS_REGION>

Once you’ve got all of that set up you can run the script; it should output a link in your console window similar to the following:

Pre-signed URL https://YOUR_BUCKET.s3.YOUR_REGION.amazonaws.com/test-file.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=CREDS&X-Amz-Date=20210717T073809Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=GENERATED_SIG

To test the url you can use something like Postman with a PUT request.

Simply pasting the url into the path should populate the query params for you. As for the body, select “binary” and browse for an image. When you’re ready, click “Send”.

You should get a 200 OK response and now be able to see your uploaded image in your destination bucket. Unless you’ve changed the key it should be under the name “test-file.jpg”.
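If you’d rather test from the command line, curl can do the same thing (the url being the one printed by the script):

# Uploads the file with an HTTP PUT
curl --upload-file ./test-file.jpg "<PRE_SIGNED_URL>"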

One of the main advantages of using a pre-signed url is that it allows you to upload images directly to AWS and bypass your backend server completely. You can also use it to sign image retrievals. This allows you to give the links a limited life-span – great for preventing hot-linking.

Thanks to the following GitHub post for pointing me in the right direction: https://github.com/aws/aws-sdk-go/issues/467#issuecomment-171468806

Viewing SQL Logs in MySQL

Hi everyone,

I’ve recently switched from PostgreSQL and MSSQL to MySQL. I ran into a bit of an issue today where I needed to see the queries being generated for an insert statement. With MSSQL I’d normally use SQL Profiler.

After a bit of Googling I came across the following solution for MySQL:

-- Enable the logging
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- View the results
SELECT *
FROM mysql.general_log
ORDER BY event_time DESC;
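One thing to keep in mind is that the general log grows quickly, so it’s worth switching it back off once you’re done:

-- Disable the logging and clear out the old entries
SET GLOBAL general_log = 'OFF';
TRUNCATE mysql.general_log;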

Running this in Sequel Pro displays an output similar to the following:

Sequel Pro screenshot of logging output for MySQL

Thanks to the following Stack Overflow post for the info: https://stackoverflow.com/a/678310/522859