Container Registry Template for Azure Bicep

Hi everyone,

Just another Bicep template I’m using – this one is for a basic container registry:

param namePrefix string
param location string = resourceGroup().location
param tier string = 'Basic'

var registryName = '${namePrefix}${uniqueString(resourceGroup().id)}'

resource container_registry 'Microsoft.ContainerRegistry/registries@2020-11-01-preview' = {
  name: registryName
  location: location
  sku: {
    name: tier
  }
  properties: {
    adminUserEnabled: false
    policies: {
      quarantinePolicy: {
        status: 'disabled'
      }
      trustPolicy: {
        type: 'Notary'
        status: 'disabled'
      }
      retentionPolicy: {
        days: 7
        status: 'disabled'
      }
    }
    encryption: {
      status: 'disabled'
    }
    dataEndpointEnabled: false
    publicNetworkAccess: 'Enabled'
    networkRuleBypassOptions: 'AzureServices'
    zoneRedundancy: 'Disabled'
    anonymousPullEnabled: false
  }
}

output id string = container_registry.id

See the following definition if you need more info on some of the properties: https://docs.microsoft.com/en-us/azure/templates/microsoft.containerregistry/2019-12-01-preview/registries?tabs=json
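To deploy it, something along these lines should work (assuming you've saved the template as container-registry.bicep and already have a resource group):

az deployment group create --template-file ./container-registry.bicep --parameters namePrefix=myacr -g "{YOUR_RESOURCE_GROUP}"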

Key Vault Template for Azure Bicep

Hi everyone,

I’m currently mucking around with Bicep as a replacement for plain ARM templates. I’ve just created a key vault and thought I’d share it in case anyone else finds it useful:

param namePrefix string
param location string = resourceGroup().location
param tenantId string

var name = '${namePrefix}${uniqueString(resourceGroup().id)}'

resource key_vault 'Microsoft.KeyVault/vaults@2019-09-01' = {
  name: name
  location: location
  properties: {
    sku: {
      family: 'A'
      name: 'standard'
    }
    tenantId: tenantId
    accessPolicies: [] // required by the schema; add your own access policies here
  }
}

resource my_test_secret 'Microsoft.KeyVault/vaults/secrets@2019-09-01' = {
  name: '${key_vault.name}/my-test-secret'
  properties: {
    value: 'my-test-value' // required; replace with your actual secret value
    attributes: {
      enabled: true
    }
  }
}

If you want to add any other properties, check out the ARM template definition: https://docs.microsoft.com/en-us/azure/templates/microsoft.keyvault/2019-09-01/vaults?tabs=json
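To deploy it you'll need your tenant id, which you can pull straight from the CLI, e.g. (assuming the template is saved as key-vault.bicep):

az deployment group create --template-file ./key-vault.bicep --parameters namePrefix=mykv tenantId=$(az account show --query tenantId -o tsv) -g "{YOUR_RESOURCE_GROUP}"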

Setup MongoDB on an Azure VM

Hi everyone,

I’m currently using a small Linux VM to host a MongoDB instance on Azure. These are the steps I followed:

## MongoDB

### Install
https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/

1. Import the public key: `wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -`
2. Create a list file:  `echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list`
   - This is for Ubuntu 18.04, check version using `lsb_release -a` 
3. Reload the local package database: `sudo apt-get update`
4. Install MongoDB: `sudo apt-get install -y mongodb-org`
5. Pin the installed packages to prevent unintended upgrades:
```
echo "mongodb-org hold" | sudo dpkg --set-selections
echo "mongodb-org-server hold" | sudo dpkg --set-selections
echo "mongodb-org-shell hold" | sudo dpkg --set-selections
echo "mongodb-org-mongos hold" | sudo dpkg --set-selections
echo "mongodb-org-tools hold" | sudo dpkg --set-selections
```
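
Once it's installed you can start the service and confirm that it's running:

```
sudo systemctl start mongod
sudo systemctl enable mongod
sudo systemctl status mongod
```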

### Allow remote access
https://www.digitalocean.com/community/tutorials/how-to-configure-remote-access-for-mongodb-on-ubuntu-20-04

1. Open the config: `sudo nano /etc/mongod.conf`
2. Find the `network interfaces` section
3. Find the `bindIp` value
4. Append your VM's IP to this address list, or use `bindIp: 0.0.0.0` to listen on all interfaces
   - *Note that this is the VM ip, not the ip of the machine you plan to connect from.*
5. Restart MongoDB `sudo systemctl restart mongod`

### Add authentication
1. Connect to the instance from an SSH session: `mongo --port 27017`
2. Create the user:
https://stackoverflow.com/a/38921949/522859
```
db.createUser(
  {
    user: "YOUR_USERNAME",
    pwd: "YOUR_PASSWORD",
    roles: [ { role: "root", db: "admin" } ]
  }
)
```
3. Restart with access control: `mongod --auth --port 27017 --dbpath /data/db1`
4. Authenticate as the user administrator: `mongo --port 27017 -u "YOUR_USERNAME" -p "YOUR_PASSWORD" --authenticationDatabase "admin"`
5. Locate the commented-out `security` heading in `/etc/mongod.conf` and uncomment it.
6. Add `authorization: enabled`:
```
security:
  authorization: enabled
```
7. Restart MongoDB to pick up the config change: `sudo systemctl restart mongod`
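
You'll also need to allow inbound traffic on port 27017 in the VM's network security group or the connection will time out (resource names below are placeholders):

```
az vm open-port --resource-group YOUR_RESOURCE_GROUP --name YOUR_VM_NAME --port 27017
```

After that you should be able to connect remotely with the user created above:

```
mongo "mongodb://YOUR_USERNAME:YOUR_PASSWORD@YOUR_VM_PUBLIC_IP:27017/admin"
```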

Setup PostgreSQL on an Azure VM

Hi everyone,

I’m currently testing out a small Linux VM with PostgreSQL on Azure. Just sharing the steps for future reference and hopefully it’ll be able to help you out as well.


## Run updates
```
sudo apt-get update
sudo apt-get install postgresql
```

## Setup postgresql.conf
1. Open the config: `sudo vim /etc/postgresql/10/main/postgresql.conf`
2. Add or uncomment `listen_addresses = '*'`

## Update pg_hba.conf
1. Open the config: `sudo nano /etc/postgresql/10/main/pg_hba.conf`
2. Add the following line, where the IP is the public IP of the machine you'll be connecting from: `host    all             all             151.XXX.XXX.XXX/32        md5`
3. Restart PostgreSQL: `sudo invoke-rc.d postgresql restart`

## Setup a PostgreSQL user
https://serverfault.com/a/248162/148152  

1. `sudo -u postgres psql postgres`
2. At the `postgres=#` prompt, run `\password postgres`
3. Enter a new password and then confirm it.

## Done
You should now be able to log in remotely (I was using pgAdmin from Windows for this).
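
If the connection times out, check that port 5432 is open in the VM's network security group, e.g. (resource names below are placeholders):

```
az vm open-port --resource-group YOUR_RESOURCE_GROUP --name YOUR_VM_NAME --port 5432
```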

Good luck!

Quick Sample to Create a VM – Azure Bicep

Hey everyone,

I’m trying out Bicep for my current side project and have created a small VM to host a PostgreSQL instance. There weren’t a lot of samples to reference in order to get this going, so I thought I’d upload it.

I’ve added a GitHub repo if you find that a little easier to go through, but all of the code is available below: https://github.com/Buzzology/bicep-vm.git

Update the resource group in the parameters file and then run the following command to deploy it (because main.bicep targets the subscription scope it's deployed with `az deployment sub create` rather than `az deployment group create`):

az deployment sub create --location "{YOUR_LOCATION}" --template-file ./main.bicep --parameters ./parameters/parameters.prod.json
// main.bicep
targetScope = 'subscription' // 'tenant', 'managementGroup', 'subscription' or 'resourceGroup'

param vmPgsqlUsername string
param vmPgsqlPassword string
param resourceGroupName string = 'not-set'
param namePrefix string = 'not-set'


module vnet_generic './vnets/vnet-generic.bicep' = {
  name: 'vnet'
  scope: resourceGroup(resourceGroupName)
  params: {
    namePrefix: '${namePrefix}-vnet'
  }
}


module vm_pgsql './virtual-machines/postgresql/vm-postgresql.bicep' = {
  name: 'vm-pgsql'
  scope: resourceGroup(resourceGroupName)
  params: {
    namePrefix: '${namePrefix}-vm'
    subnetId: vnet_generic.outputs.subnetId
    username: vmPgsqlUsername
    password: vmPgsqlPassword
  }
}
 

output vmName string = vm_pgsql.name
// vnets/vnet-generic.bicep
param namePrefix string = 'unique'
param location string = resourceGroup().location

var name = '${namePrefix}-${uniqueString(resourceGroup().id)}'
var subnetName = 'main-subnet'

resource vnet_generic 'Microsoft.Network/virtualNetworks@2020-08-01' = {
  name: name
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.0.0.0/24'
      ]
    }
    subnets: [
      {
        name: subnetName
        properties: {
          addressPrefix: '10.0.0.0/24'
          delegations: []
          privateEndpointNetworkPolicies: 'Enabled'
          privateLinkServiceNetworkPolicies: 'Enabled'
        }
      }
    ]
    virtualNetworkPeerings: []
    enableDdosProtection: false
  }
}


output vnetId string = vnet_generic.id
output subnetId string = '${vnet_generic.id}/subnets/${subnetName}'
output subnetName string = subnetName
// virtual-machines/general/vm-small/vm-small-nic.bicep
param namePrefix string = 'unique'
param location string = resourceGroup().location
param subnetId string
param privateIPAddress string =  '10.0.0.4'

var vmName = '${namePrefix}${uniqueString(resourceGroup().id)}'

resource nic_pgsql 'Microsoft.Network/networkInterfaces@2020-08-01' = {
  name: vmName
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'ipconfig1'
        properties: {
          privateIPAddress: privateIPAddress
          privateIPAllocationMethod: 'Static' // 'Dynamic' would ignore the explicit address above
          subnet: {
            id: subnetId
          }
          primary: true
          privateIPAddressVersion: 'IPv4'
        }
      }
    ]
    dnsSettings: {
      dnsServers: []
    }
    enableAcceleratedNetworking: false
    enableIPForwarding: false
  }
}


output nicId string = nic_pgsql.id
// virtual-machines/general/vm-small/vm-small.bicep
param namePrefix string = 'unique'
param location string = resourceGroup().location
param subnetId string
param ubuntuOsVersion string = '18.04-LTS'
param osDiskType string = 'Standard_LRS'
param vmSize string = 'Standard_B1s'
param username string
param password string

var vmName = '${namePrefix}${uniqueString(resourceGroup().id)}'

// Bring in the nic
module nic './vm-small-nic.bicep' = {
  name: '${vmName}-nic'
  params: {
    namePrefix: '${vmName}-nic'
    subnetId: subnetId
  }
}

// Create the vm
resource vm_small 'Microsoft.Compute/virtualMachines@2019-07-01' = {
  name: vmName
  location: location
  zones: [
    '1'
  ]
  properties: {
    hardwareProfile: {
      vmSize: vmSize
    }
    storageProfile: {
      osDisk: {
        createOption: 'FromImage'
        managedDisk: {
          storageAccountType: osDiskType
        }
      }
      imageReference: {
        publisher: 'Canonical'
        offer: 'UbuntuServer'
        sku: ubuntuOsVersion
        version: 'latest'
      }
    }
    osProfile: {
      computerName: vmName
      adminUsername: username
      adminPassword: password
    }
    networkProfile: {
      networkInterfaces: [
        {
          id: nic.outputs.nicId
        }
      ]
    }
  }
}

output id string = vm_small.id

// virtual-machines/postgresql/vm-postgresql.bicep
param namePrefix string
param subnetId string
param username string
param password string

var name = '${namePrefix}-pgsql'

module vm_pgsql '../general/vm-small/vm-small.bicep' = {
  name: name
  params: {
    namePrefix: name
    location: resourceGroup().location
    subnetId: subnetId
    ubuntuOsVersion: '18.04-LTS'
    osDiskType: 'Standard_LRS'
    vmSize: 'Standard_B1s'
    username: username
    password: password    
  }
}

output vmId string = vm_pgsql.outputs.id
// parameters/parameters.prod.json
{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "resourceGroupName": {
            "value": "prod"
        },
        "namePrefix": {
            "value": "prod"
        },
        "vmPgsqlUsername": {
            "value": "test_username"
        },
        "vmPgsqlPassword": {
            "value": "$1test_password"
        }
    }
}

Unable to parse parameter: azuredeploy.parameters.dev.json – Azure Bicep

Hi everyone,

I’ve been trying out Bicep as a replacement for ARM templates for a mini side project to replace the TFN generator people are currently using on this site. These are a few of the small issues I’ve hit while working with it.

Unable to Parse Parameter

One of the steps on the main tutorial gets you to use a parameters file to make the deployment generic (similar to Terraform). Unfortunately, I started getting the following error:

Unable to parse parameter: azuredeploy.parameters.dev.json

I mistakenly thought that this was referring to the format of the parameters themselves. However, in my case it simply meant that it was unable to locate the parameters file. If you’re getting the same error, double check the file path and ensure that it’s accessible from the location you’re executing the deployment from.
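
For reference, the path is resolved from wherever you run the deployment, e.g. (file names here are just examples):

az deployment group create --template-file ./azuredeploy.bicep --parameters ./azuredeploy.parameters.dev.json -g "{YOUR_RESOURCE_GROUP}"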

Template parameter JToken type is not valid. Expected ‘Boolean’. Actual ‘String’

This one is pretty straightforward – check the value you’re providing for your parameter. If it’s meant to be a boolean, ensure that you haven’t accidentally added quotes around it.

// Incorrect
"globalRedundancy": {
    "value": "false"
 }

// Correct
"globalRedundancy": {
    "value": false
 }

Decompilation failed with fatal error “Failed to read file:///C:/bicip….

When attempting to decompile an ARM template with a non-JSON extension I received the following error:

PS E:\repos\bicep> az bicep decompile --files postgresmql.arm
accurate conversion is not possible.
If you would like to report any issues or inaccurate conversions, please see https://github.com/Azure/bicep/issues.
E:\repos\bicep\postgresmql.arm: Decompilation failed with fatal error "Failed to read file:///E:/repos/bicep/postgresmql.json"

The decompile command appears to force a .json extension and automatically renames the argument. The only way around it so far seems to be to rename your file. See the following GitHub issue for more info: https://github.com/Azure/bicep/issues/1912
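
A simple workaround until that's resolved is to rename the file first (PowerShell, as in the session above, keeping the same flag that session used):

Rename-Item postgresmql.arm postgresmql.json
az bicep decompile --files postgresmql.json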

Thanks,

Chris

Fallback image if src doesn’t exist for image tag – ReactJS

Hey everyone,

I have a small list component that I wanted to populate with a custom image based on the entity’s id. Unfortunately, not all entities have an image so some were appearing broken.

In order to get around this I used the img tag’s onError attribute in ReactJS. If the image can’t be found the callback is invoked. In the callback I set the src of the image tag to a known placeholder.

<img
   src={`/img/widgets/${widgetId}.jpg`}
   style={{
      height: 50,
      width: 50,
   }}
   onError={(e: any) => e.target.src = '/img/widgets/no-image.jpg'}
/>

Using this attribute allows me to use a known convention to automatically populate images and then fallback to a suitable default if it’s not available.

I found the following link pretty handy when looking for a solution: https://stackoverflow.com/a/64326984/522859
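
One gotcha to be aware of: if the fallback image is ever missing too, onError fires again for it and you can end up in a request loop. A small guard avoids that (same snippet as above, just with the check added):

<img
   src={`/img/widgets/${widgetId}.jpg`}
   style={{
      height: 50,
      width: 50,
   }}
   onError={(e: any) => {
      // Only swap to the fallback once to avoid looping if it's also missing
      if (!e.target.src.endsWith('/img/widgets/no-image.jpg')) {
         e.target.src = '/img/widgets/no-image.jpg';
      }
   }}
/>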

.Net 5.0 – This package is required for the Entity Framework Core Tools to work

I was trying to create a migration in a new project within a solution and received the following error:

Build started…
Build succeeded.
Your startup project ‘SubscriptionManagementGrpcService’ doesn’t reference Microsoft.EntityFrameworkCore.Design. This package is required for the Entity Framework Core Tools to work. Ensure your startup project is correct, install the package, and try again.

The error was a bit confusing because all of my migrations for the rest of the solution were still working and none of my other startup projects had Microsoft.EntityFrameworkCore.Design as a reference.

After a bit of comparing I realised that the other projects all referenced data projects that in turn referenced Microsoft.EntityFrameworkCore.Design. Adding the package reference to the subscription data library referenced by my GrpcService fixed the issue.
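
If you hit the same thing, running the following from the data project's directory should sort it out:

dotnet add package Microsoft.EntityFrameworkCore.Design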

ElasticSearch on AWS – Anonymous is not authorized to perform es:ESHttpGet

Hi everyone,

I am trying out ElasticSearch on AWS and ran into the following error while trying to access the provided Kibana endpoint:

{"Message":"User: anonymous is not authorized to perform: es:ESHttpGet"}

This turned out to be pretty simple: I just needed to whitelist my IP. Go to your search domain in the AWS console, click access and finally select “Modify access policy”:

Then all you need to do is add another statement that gives your current IP access (replace the sample IP range and resource ARN with your own):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "es:ESHttp*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "192.0.2.0/24"
          ]
        }
      },
      "Resource": "arn:aws:es:us-west-1:987654321098:domain/test-domain/*"
    }
  ]
}

Feel free to comment below if you hit any issues, but I also found the following links pretty helpful:

https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-ac.html#es-ac-types-ip

https://aws.amazon.com/premiumsupport/knowledge-center/anonymous-not-authorized-elasticsearch/

Docker Volumes Mounting File as Folder on Windows

Hey everyone,

I ran into a small issue today using docker-compose. I had a config file I was trying to mount as a volume, however running `docker-compose up` generated a folder instead.

ERROR: for grafana Cannot start service grafana: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/host_mnt/e/repos/Sample-Twitch/grafana/datasources.yml\\\" to rootfs \\\"/var/lib/docker/overlay2/f5c9553407551b68dc5d5877d748ba2bf7d2f0dd2ad12c73e39be40af185c82d/merged\\\" at \\\"/var/lib/docker/overlay2/f5c9553407551b68dc5d5877d748ba2bf7d2f0dd2ad12c73e39be40af185c82d/merged/etc/grafana/provisioning/datasources/prometheus.yaml\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.

The issue turned out to be an oversight on my part: I’d misspelled the config filename, so Docker was unable to find it. Consequently, it created the missing path as a directory.

It took a bit of Googling to narrow this down, but eventually this Stack Overflow post pointed me in the right direction: https://stackoverflow.com/a/44950494/522859
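
For reference, the mapping from the error above boils down to something like this in docker-compose.yml (reconstructed from the log, not my original file):

grafana:
  image: grafana/grafana
  volumes:
    - ./grafana/datasources.yml:/etc/grafana/provisioning/datasources/prometheus.yaml

If ./grafana/datasources.yml doesn't exist on the host (in my case because of the typo), Docker creates it as an empty directory and then tries to mount that directory onto the container's file path, which is what triggers the "not a directory" error.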

Thanks,
Chris