Support for password authentication was removed on August 13, 2021. Please use a personal access token instead – Fix for Mac

Hey everyone,

If you’re like me and a bit slack with your personal projects, you might’ve started receiving the following error today:

admin@Admins-iMac ui % git push
remote: Support for password authentication was removed on August 13, 2021. Please use a personal access token instead.
remote: Please see https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/ for more information.
fatal: unable to access 'https://github.com/Buzzology/referrer.git/': The requested URL returned error: 403

As the message says, GitHub wants you to start using a Personal Access Token (PAT) instead of password authentication. Luckily, the fix is pretty straightforward – you’ll need to create a Personal Access Token and then update your keychain.

Step #1: Create a Personal Access Token (PAT)

To create the personal access token, log in to GitHub and go to the following page: https://github.com/settings/tokens

You can also get to this page via the following:

  1. Click on your profile dropdown
  2. Click settings
  3. Click Developer settings (on the left)
  4. Click Personal access tokens (on the left)

Once you’re on the Personal Access Tokens page, click the Generate new token button, set an expiry, tick the scopes you need (the repo scope covers pushing over HTTPS) and then copy the generated value (you’ll need it in the next step).

Step #2: Update your keychain

Now that you’ve got your Personal Access Token you need to replace the password that you’ve currently got stored in your keychain. To start, open Spotlight search and bring up Keychain Access.

If you’ve got quite a few keys there you can filter them by searching for github. You’ll then need to double-click on each of the entries and replace the stored password with your personal access token (note that you’ll first need to click Show Password).
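
If you’d rather skip the UI, you should also be able to erase the cached credential from a terminal so that git prompts you for the new token on your next push. This is a sketch assuming the default osxkeychain credential helper:

printf "protocol=https\nhost=github.com\n" | git credential-osxkeychain erase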

Now that your keychain is updated, close and then re-open any of your terminals and you should be good to go.

admin@Admins-iMac ui % git push
Enumerating objects: 110, done.
Counting objects: 100% (110/110), done.
Delta compression using up to 4 threads
Compressing objects: 100% (91/91), done.
Writing objects: 100% (93/93), 15.30 KiB | 2.19 MiB/s, done.
Total 93 (delta 64), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (64/64), completed with 14 local objects.
To https://github.com/Buzzology/referrer.git
   0d2ecf0..97f2716  master -> master
admin@Admins-iMac ui % 

Thanks to the following Stack Overflow post for the additional info: https://stackoverflow.com/a/67650257/522859

Custom Error Message for PriceInCents – Vue3 and Vuelidate

Hey everyone,

This is a quick post to show how you can add a custom error message when using Vuelidate in Vue3. In my case I have a price field that should not be greater than $1000.00. The issue is that I store the amount in cents, so the default maxValue message complains about the raw cents value (100000) rather than dollars.

This is obviously a little ambiguous to the user, who isn’t aware that everything is running in cents behind the scenes. In order to avoid the issue I used a helper from @vuelidate/validators:

// Import the following in your component
import {
  required,
  maxValue,
  helpers, // helpers provides access to the message customisation
} from "@vuelidate/validators";

...

// Add the following validation rule
priceInCents: {
  required,
  maxValue: helpers.withMessage("Price cannot be more than $1000.00", maxValue(100000)),
},
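
For completeness, this is roughly how the custom message surfaces in the template – a minimal sketch assuming the component exposes the Vuelidate instance as v$ (the convention used in the docs):

<span v-if="v$.priceInCents.$error">
  {{ v$.priceInCents.$errors[0].$message }}
</span>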

With the new rule in place the error message is much more useful.

While it doesn’t seem to show up on Google too well, this functionality is documented by Vuelidate on the following page: https://vuelidate-next.netlify.app/custom_validators.html#custom-error-messages

Create a pre-signed upload url for AWS S3 using Golang

Hi everyone,

This is just a quick post on how to create a pre-signed upload url for AWS S3 using Golang.

To generate the pre-signed url, you’ll need something like the following:

package main

import (
	"fmt"
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/joho/godotenv"
	log "github.com/sirupsen/logrus"
)

func main() {

	// Load env vars
	err := godotenv.Load(".env")
	if err != nil {
		log.Fatalf("Error loading .env file: %v", err)
	}

	// Load the bucket name
	s3Bucket := os.Getenv("S3_BUCKET")
	if s3Bucket == "" {
		log.Fatal("an s3 bucket was unable to be loaded from env vars")
	}

	// Prepare the S3 request so a signature can be generated
	// (session.Must(session.NewSession()) is the non-deprecated equivalent of session.New())
	svc := s3.New(session.Must(session.NewSession()))
	r, _ := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String(s3Bucket),
		Key:    aws.String("test-file.jpg"),
	})

	// Create the pre-signed url with an expiry
	url, err := r.Presign(15 * time.Minute)
	if err != nil {
		fmt.Println("Failed to generate a pre-signed url: ", err)
		return
	}

	// Display the pre-signed url
	fmt.Println("Pre-signed URL", url)
}

Note that we’re using godotenv to load a .env file containing the bucket name and a few AWS keys. You can get godotenv by running the following:

go get github.com/joho/godotenv

I then have a file called “.env” sitting in the root of my working directory:

S3_BUCKET=<YOUR_BUCKET_NAME>
AWS_ACCESS_KEY_ID=<YOUR_AWS_ACCESS_KEY_ID>
AWS_SECRET_ACCESS_KEY=<YOUR_AWS_SECRET_ACCESS_KEY>
AWS_REGION=<YOUR_AWS_REGION>

Once you’ve got all of that set up you can run the script; it should output a link in your console window similar to the following:

Pre-signed URL https://YOUR_BUCKET.s3.YOUR_REGION.amazonaws.com/test-file.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=CREDS&X-Amz-Date=20210717T073809Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=GENERATED_SIG

To test the url you can use something like Postman. Simply pasting the url into the path should populate the headers for you. As for the body, select “binary” and browse for an image. When you’re ready, click “Send”.

You should get a 200 OK response and now be able to see your uploaded image in your destination bucket. Unless you’ve changed the key it should be under the name “test-file.jpg”.
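
If you’d rather test from code than Postman, the upload is just a plain HTTP PUT against the signed url. Here’s a minimal Go sketch – the PRESIGNED_URL environment variable is a placeholder for the link generated above:

package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Placeholder: export the url generated by the previous program
	presignedURL := os.Getenv("PRESIGNED_URL")

	// Read the file into memory so net/http can set the Content-Length header
	data, err := os.ReadFile("test-file.jpg")
	if err != nil {
		fmt.Println("Failed to read file:", err)
		return
	}

	// A pre-signed S3 upload is just a PUT request with the file as the body
	req, err := http.NewRequest(http.MethodPut, presignedURL, bytes.NewReader(data))
	if err != nil {
		fmt.Println("Failed to build request:", err)
		return
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("Upload failed:", err)
		return
	}
	defer resp.Body.Close()

	fmt.Println("Upload status:", resp.Status) // expecting 200 OK
}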

One of the main advantages of using a pre-signed url is that it allows you to upload images directly to AWS and bypass your backend server completely. You can also use it to sign image retrievals. This allows you to give the links a limited life-span – great for preventing hot-linking.
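
Signing a retrieval follows the same pattern as the upload – in the program above you’d just swap PutObjectRequest for GetObjectRequest:

	// Pre-sign a download instead of an upload
	r, _ := svc.GetObjectRequest(&s3.GetObjectInput{
		Bucket: aws.String(s3Bucket),
		Key:    aws.String("test-file.jpg"),
	})

	// The generated url allows a GET for the next 15 minutes
	url, err := r.Presign(15 * time.Minute)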

Thanks to the following GitHub post for pointing me in the right direction: https://github.com/aws/aws-sdk-go/issues/467#issuecomment-171468806

Viewing SQL Logs in MySQL

Hi everyone,

I’ve recently switched from PostgreSQL and MSSQL to MySQL. I ran into a bit of an issue today where I needed to see the queries I was generating for an insert statement. With MSSQL I’d normally use SQL Profiler for this.

After a bit of Googling I came across the following solution for MySQL:

-- Enable the logging
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- View the results
SELECT *
FROM mysql.general_log;

Running this in Sequel Pro displays an output similar to the following:

Sequel Pro screenshot of logging output for MySQL
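
Once you’ve finished debugging it’s worth switching the log off again, as the general_log table grows very quickly:

-- Disable the logging when you're done
SET GLOBAL general_log = 'OFF';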

Thanks to the following Stack Overflow post for the info: https://stackoverflow.com/a/678310/522859

Avoiding “declared but not used” in Golang while testing

Hi everyone,

I come from a mostly JavaScript and .NET background and am currently transitioning to Golang. I’m really liking the language and ecosystem so far, but one thing that has irked me a little is that I can’t leave variables in place as placeholders, or even just for debugging.

var placeholder1 [][]string
var test1 string

Something as simple as the above results in two compiler errors stating that the variables are declared but not used.

A bit of Googling showed quite a few people with similar complaints, but there was one solution that seemed to fit my use case pretty well:

var placeholder1 [][]string
var test1 string

// Use discards
_, _ = placeholder1, test1

By simply using a discard the compilation errors can be circumvented. There’s a fairly detailed discussion on Stack Overflow: https://stackoverflow.com/a/62951339/522859
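
Since this mostly bites me while testing, here’s the same trick inside a test file – a minimal sketch (the file and test names are arbitrary):

// placeholders_test.go
package main

import "testing"

func TestPlaceholders(t *testing.T) {
	var placeholder1 [][]string
	var test1 string

	// Discard both so the compiler stays quiet while the test is a work in progress
	_, _ = placeholder1, test1
}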

GoLand Unable to Resolve Directory – Modules not loading (Mac)

Hi everyone,

I’m trying out GoLand on an old Mac and hit a bit of an issue where it was unable to resolve most of my modules. The weird thing was that if I ran the project from a terminal it was fine, so this seemed to rule out any path issues.

This ended up being a few IDE settings – double check the following:

  • Click “GoLand” in the top left
  • Click “Preferences”
  • Expand “Go” in the sidebar
  • Click “Go Modules”
    • Check “Enable Go modules integration”
  • Click “GOPATH” in the sidebar
    • Ensure that there are zero entries under the “Project GOPATH” and “Module GOPATH” headings
    • Ensure that “Use GOPATH that’s defined in system environment” is checked
    • Ensure that “Index entire GOPATH” is checked
  • Click “Apply” and “OK”

You may need to wait a short while for the indexing to refresh. If it doesn’t look like it’s doing anything, run “go mod download” from your terminal. If it’s still broken, restarting GoLand sometimes fixes it up as well.

“‘protoc-gen-go’ is not recognized as an internal or external command” – Go

Hi everyone,

I’m currently looking into Go and have hit the following error while trying to run protoc:

`protoc-gen-go` is not recognized as an internal or external command

I am using Windows and have installed the required libraries:

go get -u github.com/golang/protobuf/proto
go get -u github.com/golang/protobuf/protoc-gen-go
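
Note that on newer Go versions (1.17+) go get no longer installs executables; if the commands above don’t work for you, the module-aware equivalent for the compiler plugin (which now lives in the google.golang.org/protobuf module) is:

go install google.golang.org/protobuf/cmd/protoc-gen-go@latest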

In my case the issue turned out to be that the Go bin directory containing protoc-gen-go hadn’t been added to my PATH: E:\repos\gocode\bin

Note that the new path won’t be available until you restart your terminal.

A subsequent error I encountered was the following:

'protoc-gen-go-grpc' is not recognized as an internal or external command,
operable program or batch file.

I just had to run the following to resolve it:

go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

Azure CDN not updating after deployment – Azure

Hey everyone,

I’ve set up a deployment pipeline for the JellyWidgets.com react app that I’m currently messing around on. Unfortunately, while the deployment appeared to be successful, the CDN wasn’t updating.

This turned out to be a fairly simple fix – I needed to purge the cache after each deployment. This can be done manually using the Azure portal:

Simply click the purge button on your CDN’s profile page. For a more permanent fix you can also set up a pipeline step like the following:

      - task: AzureCLI@2
        displayName: 'Purge the CDN.'
        inputs:
          azureSubscription: $(azureSubscription)
          scriptType: 'pscore'
          scriptLocation: 'inlineScript'
          inlineScript: 'az cdn endpoint purge --resource-group widgets-prod --name $(resourceGroup) --profile-name JellyWidgets --content-paths "/*" --no-wait'
          workingDirectory: '$(Build.SourcesDirectory)/UI/infrastructure'
          failOnStandardError: true

Note that if you don’t add the “--no-wait” flag it can take a long time for the purge step to complete.


The current operating system is not capable of running this task. That typically means the task was written for Windows only. For example, written for Windows Desktop PowerShell. – AzureFileCopy@4

Hi everyone,

I am currently setting up an Azure pipeline to deploy a React app to a Blob storage container. I was a bit surprised when I hit the following error with an Ubuntu pool:

Task         : Azure file copy
Description  : Copy files to Azure Blob Storage or virtual machines
Version      : 4.184.1
Author       : Microsoft Corporation
Help         : https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-file-copy
==============================================================================
##[error]The current operating system is not capable of running this task. That typically means the task was written for Windows only. For example, written for Windows Desktop PowerShell.
##[debug]System.InvalidOperationException: The current operating system is not capable of running this task. That typically means the task was written for Windows only. For example, written for Windows Desktop PowerShell.
   at Microsoft.VisualStudio.Services.Agent.Worker.TaskRunner.RunAsync()
   at Microsoft.VisualStudio.Services.Agent.Worker.StepsRunner.RunStepAsync(IStep step, CancellationToken jobCancellationToken)

I did a quick search in the task lists and couldn’t find anything specifically for Linux. A bit of Googling brought up an open GitHub thread stating that there currently isn’t a built-in task to address this issue.

The next best alternative is to use the AzureCLI task and invoke az directly:

- task: AzureCLI@1
  displayName: Deploy the UI
  inputs:
    azureSubscription: $(azureSubscription)
    scriptLocation: inlineScript
    inlineScript: |
      # The static website container is literally named $web; the backslash
      # stops bash from expanding it as a shell variable.
      az storage blob upload-batch \
        --destination \$web \
        --account-name "$(storageAccountName)" \
        --source "$(Agent.BuildDirectory)/$(outputDir)"

I’ve linked the GitHub issue below. It covers a few alternatives to this task as well, so it’s worth reading if you’re having issues: https://github.com/microsoft/azure-pipelines-tasks/issues/8920#issuecomment-640596461

AKS Template for Azure Bicep

Hi everyone,

I’ve added AKS to my Azure environment today using Bicep. This is the Bicep template I ended up using in case it’s useful:

@description('The name of the Managed Cluster resource.')
param clusterName string

@description('The location of the Managed Cluster resource.')
param location string = resourceGroup().location

@description('Optional DNS prefix to use with hosted Kubernetes API server FQDN.')
param dnsPrefix string

@minValue(0)
@maxValue(1023)
@description('Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize.')
param osDiskSizeGB int = 0

@minValue(1)
@maxValue(50)
@description('The number of nodes for the cluster.')
param agentCount int = 1

@description('The size of the Virtual Machine. Currently the cheapest is Standard_B2s: https://azureprice.net/?currency=AUD&region=eastus2')
param agentVMSize string = 'Standard_B2s'

@description('User name for the Linux Virtual Machines.')
param linuxAdminUsername string

@description('Configure all linux machines with the SSH RSA public key string. Your key should include three parts, for example \'ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm\'')
param sshRSAPublicKey string

@allowed([
  'Linux'
])
@description('The type of operating system.')
param osType string = 'Linux'

resource clusterName_resource 'Microsoft.ContainerService/managedClusters@2020-03-01' = {
  name: clusterName
  location: location
  properties: {
    dnsPrefix: dnsPrefix
    agentPoolProfiles: [
      {
        name: 'agentpool'
        osDiskSizeGB: osDiskSizeGB
        count: agentCount
        vmSize: agentVMSize
        osType: osType
        type: 'VirtualMachineScaleSets'
        mode: 'System' // https://github.com/Azure/AKS/issues/1568#issuecomment-619642862
      }
    ]
    linuxProfile: {
      adminUsername: linuxAdminUsername
      ssh: {
        publicKeys: [
          {
            keyData: sshRSAPublicKey
          }
        ]
      }
    }
  }
  identity: {
    type: 'SystemAssigned'
  }
}

output controlPlaneFQDN string = clusterName_resource.properties.fqdn

See this link for more info on what each of the parameters do: https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-rm-template

One thing to note is that reversing this template didn’t immediately work – I had to add the node pool with a mode of ‘System’. See the following GitHub thread for more info: https://github.com/Azure/AKS/issues/1568#issuecomment-619642862
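
In case it’s useful, this is roughly how the template can be deployed with the Azure CLI – a sketch assuming the file is saved as aks.bicep, the resource group already exists, and you’re on a recent CLI version that compiles Bicep directly (all parameter values are placeholders):

az deployment group create \
  --resource-group my-aks-rg \
  --template-file aks.bicep \
  --parameters clusterName=myCluster dnsPrefix=myCluster \
    linuxAdminUsername=azureuser \
    sshRSAPublicKey="$(cat ~/.ssh/id_rsa.pub)"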