Author Archives: Chris Owens

libGL error: pci id / driver (null) – ROS 2 turtlesim

Hi everyone,

I’m currently working through the ROS 2 turtlesim tutorial on an M1 Mac running Ubuntu in Parallels. Unfortunately, I hit a ‘device not found’ error when trying to start the control node.

A bit of Googling revealed that 3D acceleration might not be enabled. To fix this, all you need to do is the following:

  • Open settings in Parallels (the cog icon)
  • Click the Hardware tab at the top
  • Click Graphics on the left-hand side
  • Click Advanced
  • Tick Enable 3D Acceleration

You’ll need to restart the VM, but once that’s done the libgl error should be resolved!

If this doesn’t work, there are a few other things you can check. First, double check that you’ve installed (or re-installed) Parallels Tools.

I have also found that 3D Acceleration seems to randomly break or reset itself. Even when the checkbox is still ticked, I’ve occasionally had to do the following to get it working again:

  • Stop the VM
  • Disable 3D Acceleration
  • Start the VM
  • Stop the VM
  • Enable 3D Acceleration
  • Start the VM

Another option mentioned in this thread is to set an environment variable before starting rviz or gazebo.
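The exact variable from the thread didn’t survive here, but a common suggestion for this class of libGL error (an assumption on my part, not confirmed from the thread) is to force Mesa’s software renderer:

```shell
# Fall back to Mesa software rendering (slower, but avoids the broken
# 3D acceleration path); then start rviz or gazebo as usual.
export LIBGL_ALWAYS_SOFTWARE=1
```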


The Repository is not Signed – Ubuntu ros2 installation

Hi everyone,

I’m currently installing ros2 on Ubuntu (with Parallels) and ran into the following error:

Hit:1 jammy InRelease
Hit:2 jammy-security InRelease
Get:3 jammy InRelease [4,673 B]
Hit:4 jammy-updates InRelease
Err:3 jammy InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F42ED6FBAB17C654
Get:5 jammy-proposed InRelease [270 kB]
Hit:6 jammy-backports InRelease
Reading package lists… Done
W: GPG error: jammy InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F42ED6FBAB17C654
E: The repository ' jammy InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

I was following the official tutorial so I was a little surprised to see this crop up. What I hadn’t realised was that I’d missed an error in the output while running sudo apt update:

chris@chris-parallels-ubuntu:~$ sudo apt update && sudo apt install curl gnupg lsb-release
sudo curl -sSL -o /usr/share/keyrings/ros-archive-keyring.gpg
Hit:1 jammy InRelease
Hit:2 jammy-security InRelease
Get:3 jammy InRelease [4,673 B]
Err:3 jammy InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F42ED6FBAB17C654
Hit:4 jammy-updates InRelease
Hit:5 jammy-proposed InRelease
Hit:6 jammy-backports InRelease
Reading package lists… Done
W: GPG error: jammy InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY F42ED6FBAB17C654
E: The repository ' jammy InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
sudo: curl: command not found

This was a clean install of Ubuntu and I hadn’t yet installed curl. Luckily, this is a simple fix:

sudo apt update
sudo apt install curl

Searching for a new site idea on Reddit!


A bit of a different sort of post this time. I’ve been looking for a new side project for a couple of weeks after finally deciding to shelve the one I’ve been working on for the last six months or so.

I had intended to build a small scraping bot as a subscription service for recruiters but soon realised that this was a pretty typical (and bad) pattern for me. I enjoy the building phase so I tend to build apps without verifying the market. I then dread engaging an unknown market and move onto something else.

I decided I would at least try to do things properly this time. Instead of building an idea of my own, I decided to find a business and ask them if they had any issues big enough that they would pay a monthly subscription to fix.

My first point of contact was an old colleague that had hired me to do some freelance work a number of times. They were keen, but unfortunately they would be pretty tied up with another project for at least the next few months.

At this point I almost went back to the recruiting idea until thinking of an approach that was a little different and definitely outside my comfort zone…Reddit?

I searched a few entrepreneur subreddits but they didn’t seem to be quite right. I then came across one that looked a bit more promising. I had hoped to find an Australian version so that I could work with someone a bit closer to home, but there didn’t seem to be a decent equivalent. After a bit more browsing I eventually decided that /r/smallbusiness was probably a pretty good starting point.

The Reddit Post

Posting to social media is definitely not something that I normally do. I occasionally comment, but for the most part I’m pretty passive. This is the post I ended up putting up:

I spent a bit of time thinking about how to approach this and decided that there were a few key things I needed for a business’s problem to be suitable.

Should be a subscription service

I have spent a lot of time building marketplaces and platform services. Unfortunately, these tend to take a long time to develop, and accepting payments on behalf of another business can get complicated and potentially risky. Communicating these requirements via Reddit would be challenging at best.

A simple subscription service using Stripe will significantly cut my development time. Building a product that charges a monthly subscription also gives me an easy way to evaluate how much a solution is worth to a customer. If they’re not willing to pay a subscription, I immediately know that I need to look for another problem. I decided that $10/month was probably a good figure to start on. If they were willing to pay that I would consider the project; if not, I’d move on to the next.

The scope should be small and easily defined

The proposed problem needs to have clear and easily defined requirements. It’s quite likely that I will only have a few short comments to get ALL of my information. I need to be able to understand it quickly and to be able to develop an MVP without any additional input from the user.

Preferably a B2B problem, not B2C

This one is more of a personal preference and definitely not firm, but I find it’s easier to work with a business than a consumer, and there’s generally a bit less competition.

The Comments

I had half expected my post to get removed, and skim-reading the very first comment definitely gave me a bit of a scare:

But luckily my post stayed up and it was only a short while later that the first business problem was posted:

I was pretty happy with this one. I have previously built a few booking systems for driving schools so I knew this was something I’d be able to do. Unfortunately, a few comments later confirmed my sneaking suspicion that this would be a fairly saturated market:

I was definitely still open to the idea, but decided to wait and see if there were any more ideas that cropped up.

It took a little while, but eventually there was another suggestion that piqued my interest:

It took a couple of comments to sort out exactly what was being requested for this one:

There were a few important bits to this one that put it at the top of my list. Firstly, the problem is well defined and there’s definitely room for an MVP with potential to extend the scope if things go well.

The problem is also something that I can see appealing to other businesses that are just chasing a bare bones asset management system. A fully fledged system is complex and requires a high price tag in order to justify the development cost. This was much smaller and definitely something that a single developer could pull off.

The third, and probably most important, factor is that the seller would be elated to find a solution for $25/month. The number here matters because I had only put forward a $10/month price – they were definitely keen to find a solution.

The snag

I worked on a few quick mocks and shared these with the potential customer. She seemed pretty happy and had a few suggestions around including imports/exports and a number of additional fields.

At this point I was pretty happy with things and decided to do a bit of competitor analysis. Unfortunately, this is when I came across AssetTiger:

They had literally everything that the customer was looking for and well within their budget of $25 per month. I double checked with them to confirm my suspicions and unfortunately it looked like AssetTiger was exactly what they were after:

They were pretty appreciative and even offered to send a donation, which was really generous. Unfortunately, I’d now exhausted all of the project suggestions on the post.

What next?

I really liked the quick feedback on the suggestions in /r/smallbusiness and have decided that I will try this on a few more subreddits to see if the approach will work.

One thing that I would like to do a little differently is to try targeting an underserved, industry-specific subreddit, e.g. agriculture or weddings. I think this might help mitigate the issue of suggestions that already have solutions – other redditors are likely to chip in with the tools they already use. Because everyone in the subreddit is in the same industry, I think there’s also a good chance it will let me identify whether multiple people are affected by the same problem.

Anyway, if you’ve made it this far – thanks for reading! I’ll post an update on how the next reddit posts go!

Errors pushing an image to a new ECR repo on AWS

Hey everyone,

I normally use DigitalOcean or Azure for docker and kubernetes but have decided to give AWS a go this time around. I was following a guide on deploying an image to a new ECR repo and hit a couple of issues.

The first was that running the login command output a help listing instead of the password I was expecting:

aws ecr get-login --no-include-email --region us-east-2

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument operation: Invalid choice, valid choices are:

batch-check-layer-availability           | batch-delete-image                      
batch-get-image                          | batch-get-repository-scanning-configuration
complete-layer-upload                    | create-pull-through-cache-rule          
create-repository                        | delete-lifecycle-policy                 
delete-pull-through-cache-rule           | delete-registry-policy                  
delete-repository                        | delete-repository-policy                
describe-image-replication-status        | describe-image-scan-findings            
describe-images                          | describe-pull-through-cache-rules       

This turned out to be because the command has been deprecated (it was removed entirely in AWS CLI v2). Instead, use the following:

aws ecr get-login-password | docker login --username AWS --password-stdin "$(aws sts get-caller-identity --query Account --output text).dkr.ecr.<REGION_ID>.amazonaws.com"

There’s a pretty detailed thread on github here:

The second issue I ran into was an error while trying to run the new command:

An error occurred (AccessDeniedException) when calling the GetAuthorizationToken operation: User: arn:aws:iam::<ACCOUNT_ID>:user/<USER> is not authorized to perform: ecr:GetAuthorizationToken on resource: * because no identity-based policy allows the ecr:GetAuthorizationToken action

Attaching the following managed policy to my user resolved the issue: AmazonEC2ContainerRegistryPowerUser
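If you prefer the CLI over the console, attaching the managed policy looks something like this (the user name is a placeholder; the ARN is the standard AWS-managed policy):

```shell
# Attach the AWS-managed ECR power-user policy to the IAM user.
POLICY_ARN="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser"
aws iam attach-user-policy --user-name "<USER>" --policy-arn "$POLICY_ARN"
```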

Once I was past this, I hit another issue using the command from the github link above:

Error response from daemon: login attempt to https://<ACCOUNT_ID> failed with status: 400 Bad Request

This took a bit of digging, but eventually I came across a thread where someone was using the same command and had hit the same issue. Adding the region to the get-login-password call seemed to fix it:

aws ecr get-login-password --region <REGION_ID> | docker login --username AWS --password-stdin "$(aws sts get-caller-identity --query Account --output text).dkr.ecr.<REGION_ID>.amazonaws.com"

I was finally getting a login succeeded message and my push was working. This was the thread mentioning the region id just in case you need a bit more info:

Console.log output not appearing – AWS SAM Node.js

Hi everyone,

I ran into a bit of an interesting issue today after updating AWS SAM. All of my node log output stopped appearing in my console locally.

For now, there’s a pretty simple workaround:

sam build --use-container && sam local start-api 2>&1 | tr "\r" "\n"

Append 2>&1 | tr "\r" "\n" (including quotes) to your start-api command and you should begin to see the output as expected – the tr call converts the carriage returns that were swallowing the log lines into newlines:

Thanks to the following links for the info:

CREATE_IN_PROGRESS when creating a certificate with CloudFormation

Hi everyone,

I ran into a bit of an issue today while creating a certificate with CloudFormation. After kicking the stack off it ended up hanging on a step to create a domain verification entry in Route 53.

I had used this script multiple times for creating a certificate for a subdomain, but this time I included an apex domain as well. In order to narrow things down a little further I checked out the certificate via the console:

While the subdomain had passed, the apex domain was still sitting in pending. Surprisingly, the record DID exist in Route 53. In order to get things moving again I manually deleted the record and then clicked “Create records in Route 53”.

This re-created the record I’d just deleted, and after a couple of minutes the domain validation passed and then the certificate was created:

This was a bit of a weird one that I have been unable to reproduce. I’m not certain why the DNS validation ended up hanging but retriggering the process seems to have resolved it.
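If you hit something similar, you can also watch the validation state from the CLI instead of the console (the certificate ARN below is a placeholder):

```shell
# List each domain on the certificate alongside its validation status.
QUERY='Certificate.DomainValidationOptions[*].[DomainName,ValidationStatus]'
aws acm describe-certificate \
  --certificate-arn "<CERT_ARN>" \
  --query "$QUERY" \
  --output table
```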

Note that there are other legitimate reasons why your deployment might be hanging at this step:

When you use the AWS::CertificateManager::Certificate resource in a CloudFormation stack, domain validation is handled automatically if all three of the following are true: The certificate domain is hosted in Amazon Route 53, the domain resides in your AWS account, and you are using DNS validation.

However, if the certificate uses email validation, or if the domain is not hosted in Route 53, then the stack will remain in the CREATE_IN_PROGRESS state. Further stack operations are delayed until you validate the certificate request, either by acting upon the instructions in the validation email, or by adding a CNAME record to your DNS configuration. For more information, see Option 1: DNS Validation and Option 2: Email Validation.

Adding a Custom Domain Name – AWS SAM

Hi everyone,

It’s been a long time, but I’m messing around with AWS SAM again. I’m currently converting an existing project into an AWS SAM application, and as part of this I needed to add a custom domain. Unfortunately, the doco on how to do this isn’t great, so I’m sharing what ended up working for me in case it helps someone else.

To start, these are the resources that you’ll need in your template.yaml:


  ApiCertificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub api-${Stage}
      ValidationMethod: DNS

  ApiGatewayApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: !Ref Stage
      # Allows to call these APIs
      # SAM will automatically add AllowMethods with a list of methods for this API
      Cors: "''"
      EndpointConfiguration: REGIONAL
      Domain:
        DomainName: !Sub api-${Stage}
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "" # NOTE: The period at the end is required

You’ll also need to make sure you reference the gateway from your function:

    # This lambda function is used to test endpoint availability.
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/get-ping.getPingHandler
      Runtime: nodejs14.x
      Architectures:
        - x86_64
      MemorySize: 128
      Timeout: 30
      Events:
        Api:
          Type: Api
          Properties:
            Path: /ping2 # NOTE: AWS overrides the ping command.
            Method: GET
            RestApiId:
              Ref: ApiGatewayApi # NOTE: Make sure you have this referencing correctly.
      Description: Responds with 'Pong' if successful.

Now when you run sam deploy it will continue as usual until it gets to the stage of creating your certificate:

Here it is looking for a specific DNS entry in order to confirm that you own the domain. You’ll need to go into Route53 (or whichever other DNS provider you’re using) and add a CNAME entry with the specified details:

Note that your name and value contents should come from the output of the ApiCertificate job (highlighted in the screenshot above).

Once that’s done you’ll need to wait about sixty seconds for the DNS records to propagate within AWS. You should then be able to access your api using the new domain:

Thanks to the following github post for the pointers in the right direction:

aws sam No hosted zones named found

Note that if you get the error above when trying to deploy, please ensure that you’ve added the trailing “.” to your Route53 HostedZoneName in the api gateway in your template.yaml:

      Domain:
        DomainName: !Sub api-${Stage}
        CertificateArn: !Ref ApiCertificate
        Route53:
          HostedZoneName: "" # NOTE: The period at the end is required

Golang and MySQL – DigitalOcean managed cluster

Hey everyone,

Just sharing a helper function to get you started when connecting to a managed MySQL cluster on DigitalOcean with Golang.

Before we get into the code you’ll need to grab a couple of things from the database dashboard (on DigitalOcean).

  • Open the databases tab
  • Look for the “Connection Details” section
  • Download your ca cert file
  • Copy down your “public network” settings
    • If you’re moving this into a cluster you can use the “private network” settings instead

// Imports assumed by the helper below: the go-sql-driver/mysql driver and
// logrus (aliased to log) – swap in your own logger if you use something else.
import (
	"crypto/tls"
	"crypto/x509"
	"database/sql"
	"fmt"
	"io/ioutil"

	"github.com/go-sql-driver/mysql"
	log "github.com/sirupsen/logrus"
)

// initDb initialises the connection to mysql
func initDb(connectionString string, caCertPath string) (*sql.DB, error) {

	log.Infof("initialising db connection")

	// Prepare ssl if required:
	if caCertPath != "" {

		log.Infof("Loading the ca-cert: %v", caCertPath)

		// Load the CA cert
		certBytes, err := ioutil.ReadFile(caCertPath)
		if err != nil {
			log.Fatal("unable to read in the cert file ", err)
		}

		caCertPool := x509.NewCertPool()
		if ok := caCertPool.AppendCertsFromPEM(certBytes); !ok {
			log.Fatal("failed-to-parse-sql-ca")
		}

		tlsConfig := &tls.Config{
			InsecureSkipVerify: false,
			RootCAs:            caCertPool,
		}

		// The connection string must include "?tls=bbs-tls" for this
		// config to be picked up by the driver.
		mysql.RegisterTLSConfig("bbs-tls", tlsConfig)
	}

	var sqlDb, err = sql.Open("mysql", connectionString)
	if err != nil {
		return nil, fmt.Errorf("failed to connect to the database: %v", err)
	}

	// Ensure that the database can be reached
	err = sqlDb.Ping()
	if err != nil {
		return nil, fmt.Errorf("error on opening database connection: %s", err.Error())
	}

	return sqlDb, nil
}

A couple of things to note in the helper above.

  1. You’ll need to provide the path to your downloaded ca-cert as the second argument
  2. Your connection string will need to look something like the following: USERNAME:PASSWORD@tcp(HOST_NAME:PORT_NUMBER)/DB_NAME
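As a concrete sketch (all values below are made up), the connection string would be assembled like so. Note the tcp(...) wrapper around host and port, and the tls parameter matching the "bbs-tls" config registered in the helper:

```shell
# Hypothetical credentials/host; substitute your DigitalOcean values.
DB_USER="doadmin"
DB_PASS="secret"
DB_HOST="db-mysql-nyc1-12345.b.db.ondigitalocean.com"
DB_PORT="25060"
DB_NAME="defaultdb"

# The tcp(...) wrapper is required; ?tls=bbs-tls selects the registered TLS config.
CONN_STR="${DB_USER}:${DB_PASS}@tcp(${DB_HOST}:${DB_PORT})/${DB_NAME}?tls=bbs-tls"
echo "$CONN_STR"
```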

Note that the “tcp(…)” bit is required, see the following post for more info:

Depending on which version of the mysql driver you’re using you may also need to revert to the legacy auth mechanism:

failed to connect to the database: default addr for network unknown – MySql and Golang

Hey everyone,

I’m currently setting up a mysql database on DigitalOcean and hit the following error when connecting:

failed to connect to the database: default addr for "DATABASE_CONN_STR" network unknown 

Luckily this turned out to be a pretty easy fix. In the mysql driver repo you can see that the only scenario where this error is shown is when the network doesn’t match “tcp” or “unix”.

// Set default network if empty
	if cfg.Net == "" {
		cfg.Net = "tcp"
	}

	// Set default address if empty
	if cfg.Addr == "" {
		switch cfg.Net {
		case "tcp":
			cfg.Addr = "127.0.0.1:3306"
		case "unix":
			cfg.Addr = "/tmp/mysql.sock"
		default:
			return errors.New("default addr for network '" + cfg.Net + "' unknown")
		}
	} else if cfg.Net == "tcp" {
		cfg.Addr = ensureHavePort(cfg.Addr)
	}

To fix it, all that was required was to wrap the host and port portion of the connection string in tcp(…) (or unix(…)).
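To make the change concrete (all values below are placeholders), here’s a before/after of the connection string:

```shell
# Before – fails with "default addr for network unknown":
BAD_DSN="user:pass@myhost:25060/mydb"
# After – host:port wrapped in tcp(...):
GOOD_DSN="user:pass@tcp(myhost:25060)/mydb"
echo "$GOOD_DSN"
```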

In my case I didn’t have the network or the address set explicitly, so I find it a bit strange that the “set default address if empty” check was triggered.

Thanks to this stackoverflow post and github link for the info:

Configure AWS Route53 domain to point to DigitalOcean name servers

Hey everyone,

This is a quick post on how to point your AWS Route53 domain to DigitalOcean. I’m currently messing around with Kubernetes on DigitalOcean (DOKS) and want to use their name servers to point my domain at nginx.

The guide I was following was missing a specific walkthrough for Route53, so I’m just posting what I did in case anyone else finds it useful.

To start, open up the “Registered Domains” tab on Route 53:

Then click on your domain and under name servers click “Add or edit name servers”:

Replace the existing AWS name servers with the DigitalOcean ones and then click update:

The values that you’ll need to use are:

  • ns1.digitalocean.com
  • ns2.digitalocean.com
  • ns3.digitalocean.com
Note that these changes aren’t immediate. However, you should see a success message and receive an email notification stating the changes have been requested.

I found the AWS doco useful when trying to sort this one out: