authorclaw0ry <me@claw0ry.net>2024-12-11 13:56:52 +0100
committerclaw0ry <me@claw0ry.net>2024-12-11 13:56:52 +0100
commit4719cc03837490ed4bf1b9725d75a686e56e5a6a (patch)
tree769dd3a3a87153df049b3043196bd131495b10ad /content/posts
fresh start
Diffstat (limited to 'content/posts')
-rw-r--r--content/posts/automatically-deploy-your-website-with-git-hooks.md33
-rw-r--r--content/posts/automatically-release-go-app-with-goreleaser-and-github.md40
-rw-r--r--content/posts/azure-device-flow-authentication-in-go.md179
-rw-r--r--content/posts/azure-functions-custom-handler-golang-blob-storage-output-binding.md124
-rw-r--r--content/posts/basic-tasks-to-get-you-started-automating-with-powershell.md104
-rw-r--r--content/posts/basic_linux.md624
-rw-r--r--content/posts/cgit_idle_no_data.md79
-rw-r--r--content/posts/compare-two-dates-in-servicenow.md99
-rw-r--r--content/posts/copy-to-clipboard-in-servicenow.md25
-rw-r--r--content/posts/deploy-hugo-on-git-push.md13
-rw-r--r--content/posts/enable-external-booking-of-meeting-rooms.md140
-rw-r--r--content/posts/exchange-online-check-your-tenant-for-forwarding-rules.md106
-rw-r--r--content/posts/generate-access-tokens-for-microsoft-services-with-powershell.md82
-rw-r--r--content/posts/generate-microsoft-partner-center-refresh-token.md86
-rw-r--r--content/posts/get-array-of-unique-objects-in-servicenow.md55
-rw-r--r--content/posts/get-status-code-for-failed-webrequests-in-powershell.md88
-rw-r--r--content/posts/get-type-definition-in-powershell.md215
-rw-r--r--content/posts/get_changed_fields_in_server_scripts_in_servicenow.md36
-rw-r--r--content/posts/getting-started-with-azure-functions.md231
-rw-r--r--content/posts/getting_started_with_powershell_remoting_on_linux.md6
-rw-r--r--content/posts/golang-format-date-and-time.md76
-rw-r--r--content/posts/golang-generate-random-numbers.md53
-rw-r--r--content/posts/handling-request-and-response-in-servicenow-scripted-rest-api.md164
-rw-r--r--content/posts/improving-powershell-profile.md94
-rw-r--r--content/posts/initiate-config-trick-in-python.md63
-rw-r--r--content/posts/interacting_with_azure_keyvault_in_go.md293
-rw-r--r--content/posts/linux_server_hardning.md144
-rw-r--r--content/posts/make-git-work-with-multiple-accounts.md116
-rw-r--r--content/posts/monitor-azure-keyvault-for-expiring-secrets-and-certificates.md13
-rw-r--r--content/posts/powershell-extract-windows-spotlight-images.md73
-rw-r--r--content/posts/rss-feed-urls.md38
-rw-r--r--content/posts/servicenow-http-client.md180
-rw-r--r--content/posts/servicenow-sending-notifications-to-microsoft-teams.md172
-rw-r--r--content/posts/setting_up_puppet_lab_with_virtual_box.md291
-rw-r--r--content/posts/simple-url-shortner-with-powershell-and-azure-functions.md66
-rw-r--r--content/posts/using_go_vanity_url_with_cgit.md46
-rw-r--r--content/posts/web_requests_with_basic_authentication_in_powershell.md77
-rw-r--r--content/posts/working_with_comments_and_work_notes_in_servicenow.md30
38 files changed, 4354 insertions, 0 deletions
diff --git a/content/posts/automatically-deploy-your-website-with-git-hooks.md b/content/posts/automatically-deploy-your-website-with-git-hooks.md
new file mode 100644
index 0000000..a2af52e
--- /dev/null
+++ b/content/posts/automatically-deploy-your-website-with-git-hooks.md
@@ -0,0 +1,33 @@
+---
+title: 'Automatically Deploy Your Website With Git Hooks'
+description: 'How to automatically deploy your website using git hooks'
+date: '2024-11-14'
+tags: ['linux', 'git', 'web']
+draft: true
+---
+
+In the spirit of [small web](https://small-web.org) and [indie web](https://indieweb.org) let's take a look at how we can deploy our own website with git hooks and a simple shell script.
+
+<!--more-->
+
+In the old days, before Wordpress and git, you developed your website locally on your computer using HTML, CSS and Javascript. When you wanted to deploy a new version of your site, you simply opened an FTP program, connected to the web server and dragged and dropped the files into it.
+
+These days, things have become more complicated and people often tie into paid services to avoid the complexity of deploying their own website. But if you are a little interested in web servers, git and owning your content, it doesn't have to be that complicated.
+
+## 1. Setup a server
+
+Your own server at home.
+Using Linode/Digital Ocean.
+
+## 2. Setup a domain
+
+## 3. Install a web server
+
+Either nginx or Apache2
+
+## 4. Install git
+
+## 5. Setup git hooks
+
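+The idea is to push to a bare repository on the server and let a `post-receive` hook check out the files into the web root. A minimal sketch (the paths and branch name below are placeholder assumptions):
+
+```bash
+#!/bin/sh
+# hooks/post-receive in the bare repository (e.g. /srv/git/website.git)
+# check out the pushed files into the web server's document root
+GIT_WORK_TREE=/var/www/html git --git-dir=/srv/git/website.git checkout -f main
+```
+
+Remember to make the hook executable: `chmod +x hooks/post-receive`.
+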
+## 6. Deploy with git push
+
diff --git a/content/posts/automatically-release-go-app-with-goreleaser-and-github.md b/content/posts/automatically-release-go-app-with-goreleaser-and-github.md
new file mode 100644
index 0000000..9b9f9e6
--- /dev/null
+++ b/content/posts/automatically-release-go-app-with-goreleaser-and-github.md
@@ -0,0 +1,40 @@
+---
+title: 'Automatically release Go app with Goreleaser and Github'
+description: "We're taking a look at how to automatically release Go applications with Goreleaser and Github"
+tags: ['go', 'golang', 'github', 'automation']
+date: 2021-08-24T01:02:53+02:00
+draft: true
+---
+
+One of the biggest benefits of Go is that you can distribute your application as a single binary, and it's pretty simple for Go programmers to install any Go app using `go install`. But most of the time you also want to provide a way for non-Go users to download your application, so in this post we're going to take a look at how you can automatically publish a new release on Github with GoReleaser that includes your binary.
+
+<!--more-->
+
+## What is GoReleaser?
+
+- brief introduction
+- why we need goreleaser
+- ease of use
+
+## Setup our sample project
+
+- create a new goreleaser-example repo with a small golang app
+- add the finished project as another branch
+- git clone repo
+
+## Install GoReleaser locally
+
+- use brew for macos
+
+## Create GoReleaser config
+
+- `goreleaser init`
+- change archives format to binary
+
+## Testing goreleaser before activating in prod
+
+- how to use the --skip-publish and --snapshot params (see the example below)
+
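+For example, a local dry run could look like this (a sketch; flag names vary slightly between GoReleaser versions):
+
+```bash
+# build a snapshot release locally without publishing anything
+goreleaser release --snapshot --skip-publish --rm-dist
+```
+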
+## Add goreleaser to Github Actions workflow
+
+- only trigger when a new tag is pushed (see the workflow sketch below)
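+
+A minimal workflow sketch (the action versions, Go version and flags here are illustrative assumptions):
+
+```yaml
+# .github/workflows/release.yml (sketch)
+name: release
+
+on:
+  push:
+    tags:
+      - 'v*'
+
+jobs:
+  goreleaser:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+        with:
+          fetch-depth: 0   # GoReleaser needs the full history and tags
+      - uses: actions/setup-go@v2
+        with:
+          go-version: '1.17'
+      - uses: goreleaser/goreleaser-action@v2
+        with:
+          args: release --rm-dist
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+```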
diff --git a/content/posts/azure-device-flow-authentication-in-go.md b/content/posts/azure-device-flow-authentication-in-go.md
new file mode 100644
index 0000000..98f1632
--- /dev/null
+++ b/content/posts/azure-device-flow-authentication-in-go.md
@@ -0,0 +1,179 @@
+---
+title: "Azure Device Flow Authentication in Go"
+description: "Create a simple test application in Go for authenticating to Azure with Device Flow mechanism"
+tags: ['go', 'golang', 'azure']
+date: 2021-10-22T11:44:49+02:00
+draft: false
+---
+
+One of the use cases of Go is to create CLI tools for developers to ease their work. Often it involves creating/modifying resources or retrieving information from cloud services. In this post we're going to set up device flow authentication against AzureAD in Go.
+
+<!--more-->
+
+## Prerequisites
+
+I'm assuming you already have Go installed. You also need an Azure tenant to set up authentication against. This requires that you have permissions to create "App Registrations" in Azure.
+
+## Setting up App Registration in Azure
+
+The first thing we must do is to create our application in Azure, which is where we set what permissions authenticated users will have and who's allowed to log in through our application.
+
+1. Go to Azure portal
+2. Search for and click on "App Registrations"
+3. Click "New Registrations" in the left corner
+4. Give your application a name (I'm going for "Azure Device Flow Test")
+5. Choose which directories are allowed to authenticate. In this example we're going to use the single tenant (first option)
+6. Click "Register"
+
+Now you have an application in Azure. By default all users in our organization have access to authenticate through our app, but the app does not have any permissions yet. For this example, we're going to give it permissions to the "Azure Service Management" API, which means that users authenticated through our app can manage resources in Azure.
+
+NOTE: Take note of "Application (client) ID" and "Directory (tenant) ID". We're going to need those later when setting up our Go CLI.
+
+### API Permissions
+
+For our application to be able to do anything, we must assign it some API Permissions.
+
+1. Click on "API Permissions" in the left menu
+2. Click on "Add Permission"
+3. Then choose the "Azure Service Management"
+4. We only have one option here, which is "user_impersonation". Check it
+5. Click on "Add permissions"
+
+Instead of having our users consent to this application managing Azure resources on their behalf, we're going to grant admin consent, which means that we grant access on behalf of all users of our application. NOTE: You must be an administrator in the tenant to grant admin consent.
+
+1. Click on "Grant admin consent for <you_tenant_name>"
+2. Click on "Yes" in the popup
+
+### Authentication
+
+Authentication in Azure uses the OAuth2 protocol, which usually takes a redirect URL that the user will be sent to when authentication is successful. But since we're creating a CLI application, we don't have a redirect URL. Therefore, we must tell Azure that this application is a public application without a redirect URL.
+
+1. In our App Registration, click on "Authentication" in the left menu
+2. Make sure "Yes" is checked on "Allow public client flows"
+3. Click "Save" at the top
+
+As of now, our application allows anyone within our organization/directory to authenticate through it, and then gives the client access to manage resources in Azure on behalf of the user. The client will only have the same access that the user actually has.
+
+If you want to limit who can authenticate through this app even further, you can go into "Enterprise Application" and set some properties there.
+
+1. Search for and click on "Enterprise Application"
+2. Find your app in the list and click on it
+3. Click on "Properties" in the left menu
+4. Make sure "Assignment required" is "Yes" and click on "Save"
+5. Click on "Users and groups"
+6. Add users and/or groups that should be allowed access
+
+## Setting up our Go application
+
+So, now that we're done with the Azure part, let's set up our Go CLI.
+
+We're going to build a CLI application that will use device flow authentication against Azure and then list all resource groups in a subscription.
+
+### Project setup
+
+Since this is a very simple CLI application, we are only going to have a `main.go` file.
+
+```bash
+mkdir azure-device-flow-test
+cd azure-device-flow-test
+
+go mod init github.com/madsaune/azure-device-flow-test
+
+touch main.go
+```
+
+### Device Flow Authentication
+
+The first step is to set up authentication.
+
+```go
+// main.go
+
+package main
+
+import (
+ "log"
+
+ "github.com/Azure/go-autorest/autorest/azure/auth"
+)
+
+func main() {
+ deviceConfig := auth.NewDeviceFlowConfig("<app_id>", "<tenant_id>")
+ authorizer, err := deviceConfig.Authorizer()
+ if err != nil {
+ log.Fatalf("error: could not get authorizer, %v", err)
+ }
+
+ // the authorizer is used in the next step; this blank assignment just
+ // keeps the snippet compiling on its own (Go errors on unused variables)
+ _ = authorizer
+}
+```
+
+This is all the code we need to authenticate to Azure. When authentication is done you can pass the `authorizer` struct to clients in the Azure SDK for Go, which we will do next.
+
+NOTE: If you get the error "failed to get oauth token from device flow: failed to finish device auth flow: autorest/adal/devicetoken: invalid_client: AADSTS7000218: The request body must contain the following parameter: 'client_assertion' or 'client_secret'." you most likely forgot to make your App Registration in Azure public.
+
+### List all resource groups
+
+Now that we have our `authorizer` struct, we can pass it to our `resources.GroupsClient` when retrieving our resource groups.
+
+```go
+import (
+ // ...
+ "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2020-10-01/resources"
+)
+
+// ...
+
+ctx := context.Background()
+
+// We initialize a new GroupsClient with our subscription ID
+// and the clients authorizer to the one we got from the
+// device flow authentication, so that the GroupsClient can
+// authenticate as us
+c := resources.NewGroupsClient("<subscription_id>")
+c.Authorizer = authorizer
+
+// ListComplete will give us all resource groups within the
+// subscription that the GroupsClient is initialized for
+groupList, err := c.ListComplete(ctx, "", nil)
+if err != nil {
+ log.Fatalf("error: could not list resource groups, %v", err)
+}
+
+// Loop through the result
+for groupList.NotDone() {
+ // Retrieve the current Group struct from the iterator
+ group := groupList.Value()
+
+ // Let's print the name and location of the resource group
+ fmt.Printf("- %s (Location: %s)\n", *group.Name, *group.Location)
+
+ // NextWithContext() will return an error if there are no more results.
+ // We then want to exit the loop
+ if err := groupList.NextWithContext(ctx); err != nil {
+ break
+ }
+}
+```
+
+You must provide your own `app_id`, `tenant_id` and `subscription_id`. The complete code can be found on Github [here](https://github.com/madsaune/azure-device-flow-test).
+
+### Build and run
+
+If we try to build and run our application it will behave something like this.
+
+```bash
+mm@box:~/code/azure-device-flow-test$ go build
+mm@box:~/code/azure-device-flow-test$ ./azure-device-flow-test
+mm@box:~/code/azure-device-flow-test$ 2021/10/22 12:26:42 To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code CLEUYSFZK to authenticate.
+- my-example-rg-1 (Location: westeurope)
+- my-example-rg-2 (Location: westeurope)
+- my-example-rg-3 (Location: westeurope)
+- my-example-rg-4 (Location: westeurope)
+```
+
+On line 3 you must open the URL in a browser and enter the code in the box, then authenticate as a user in your tenant. When that's done, it will list resource groups in the specified subscription.
+
+## Next steps
+
+Now we have a great foundation to start working with resources in Azure through Go. We could initialize more clients and use the same authorizer.
+
+One thing that is a must to implement is getting `app_id`, `tenant_id` and `subscription_id` from environment variables. These IDs should not be hardcoded in your application.
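+
+A minimal sketch of how that could look (the environment variable names here are just my own choice):
+
+```go
+// read the IDs from the environment instead of hardcoding them
+appID := os.Getenv("AZURE_CLIENT_ID")
+tenantID := os.Getenv("AZURE_TENANT_ID")
+subscriptionID := os.Getenv("AZURE_SUBSCRIPTION_ID")
+
+if appID == "" || tenantID == "" || subscriptionID == "" {
+ log.Fatal("error: AZURE_CLIENT_ID, AZURE_TENANT_ID and AZURE_SUBSCRIPTION_ID must be set")
+}
+
+deviceConfig := auth.NewDeviceFlowConfig(appID, tenantID)
+```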
diff --git a/content/posts/azure-functions-custom-handler-golang-blob-storage-output-binding.md b/content/posts/azure-functions-custom-handler-golang-blob-storage-output-binding.md
new file mode 100644
index 0000000..75e790e
--- /dev/null
+++ b/content/posts/azure-functions-custom-handler-golang-blob-storage-output-binding.md
@@ -0,0 +1,124 @@
+---
+title: "Golang: Azure Functions Blob Storage Output Binding"
+date: 2024-02-26T22:19:04+01:00
+draft: false
+---
+
+Lately, I've been setting up an Azure Function App with a custom handler written in Go. One of my functions needs to download a file from an external URL and then upload that file to Azure Blob Storage. Unfortunately, neither the documentation on Microsoft Learn nor the examples on Github mention how to use Blob Storage as an output binding for custom handlers. So I decided to do a little write-up on how I solved it.
+
+<!--more-->
+
+There are two types of files that you can upload:
+
+- binary
+- textfile
+
+If you are uploading a binary, the Azure Function App host expects the file as a byte array (`[]byte`); otherwise it expects the file as a base64 encoded string.
+
+> NOTE: I don't handle errors in these examples to keep the code short, but you should always handle errors!
+
+## As binary
+
+To upload a file as binary to Blob Storage using output bindings, we need to specify in the `function.json` that the `dataType` will be 'binary'. Then when we return our custom handler payload to the Azure Function App, the `returnValue` must be a byte array (`[]byte`).
+
+```json
+// file: function.json
+
+{
+ "bindings": [
+ // ...
+ {
+ "name": "$return",
+ "type": "blob",
+ "direction": "out",
+ "path": "reports/my_report.csv",
+ "connection": "AzureWebJobsStorage",
+ "dataType": "binary"
+ }
+ ]
+}
+```
+
+```go
+// file: handler.go
+
+type BlobOutputBinding struct {
+ ReturnValue interface{}
+}
+
+func DownloadHandler(w http.ResponseWriter, r *http.Request) {
+
+ // ... Logic for downloading file
+
+ // 1. Convert our response body (a.k.a downloaded file) to []byte
+ data, _ := io.ReadAll(resp.Body)
+
+ // 2. Create our custom handler response payload
+ // since we named our blob output `$return`, we can use the `returnValue` instead of `Outputs["outblob"]`
+ binding := BlobOutputBinding{
+ ReturnValue: data,
+ }
+
+ // 3. convert the binding to JSON
+ response, _ := json.Marshal(binding)
+
+ // 4. Respond to Azure Function with our binding
+ w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(http.StatusOK)
+ w.Write(response)
+}
+```
+
+## As text file
+
+If we want to upload a text file, the `dataType` in `function.json` must be 'string', or we can leave it out because 'string' is the default value. When `dataType` is 'string' the Azure Function App expects our custom handler to return the file as a base64 encoded string.
+
+```json
+// file: function.json
+
+{
+ "bindings": [
+ // ...
+ {
+ "name": "$return",
+ "type": "blob",
+ "direction": "out",
+ "path": "reports/my_report.csv",
+ "connection": "AzureWebJobsStorage",
+ }
+ ]
+}
+```
+
+```go
+// file: handler.go
+
+type BlobOutputBinding struct {
+ ReturnValue interface{}
+}
+
+func DownloadHandler(w http.ResponseWriter, r *http.Request) {
+
+ // ... logic for downloading file
+
+ // 1. convert our response body (a.k.a downloaded file) to []byte
+ data, _ := io.ReadAll(resp.Body)
+
+ // 2. convert to base64
+ encoded := base64.StdEncoding.EncodeToString(data)
+
+ // 3. create our custom handler response payload
+ // since we named our blob output `$return`, we can use the `returnValue` instead of `Outputs["outblob"]`
+ binding := BlobOutputBinding{
+ ReturnValue: encoded,
+ }
+
+ // 4. convert the binding to JSON
+ response, _ := json.Marshal(binding)
+
+ // 5. respond to Azure Function with our binding
+ w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(http.StatusOK)
+ w.Write(response)
+}
+```
diff --git a/content/posts/basic-tasks-to-get-you-started-automating-with-powershell.md b/content/posts/basic-tasks-to-get-you-started-automating-with-powershell.md
new file mode 100644
index 0000000..932accc
--- /dev/null
+++ b/content/posts/basic-tasks-to-get-you-started-automating-with-powershell.md
@@ -0,0 +1,104 @@
+---
+title: "Basic tasks to get you started automating with Powershell"
+date: 2024-04-05T22:00:00+02:00
+tags: ['Powershell']
+draft: true
+---
+
+Here are 10 common tasks that you will need to master when automating in Powershell (or any language for that matter).
+
+## 1. Reading and writing files
+
+```powershell
+# Let's write something to a file
+PS> "Hello, World!" | Out-File -FilePath myfile.txt
+
+# Read contents of file
+PS> Get-Content -Path myfile.txt
+Hello, World!
+
+# Shorthand
+PS> gc myfile.txt
+Hello, World!
+```
+
+## 2. Working with data
+
+### Convert Powershell object to JSON
+
+```powershell
+PS> $myobj = [PSCustomObject]@{
+> Name = "dotpwsh"
+> Homepage = "https://dotpwsh.com"
+> Twitter = "@moiaune"
+> }
+PS> $myobj | ConvertTo-Json
+{
+ "Name": "dotpwsh",
+ "Homepage": "https://dotpwsh.com",
+ "Twitter": "@moiaune"
+}
+```
+
+### Convert JSON string to Powershell object
+
+```powershell
+PS> $jsonData = @"
+> {
+> "Name": "dotpwsh",
+> "Homepage": "https://dotpwsh.com",
+> "Youtube": "https://youtube.com/@moiaune"
+> }
+> "@
+PS> $jsonData
+{
+ "Name": "dotpwsh",
+ "Homepage": "https://dotpwsh.com",
+ "Youtube": "https://youtube.com/@moiaune"
+}
+PS> $jsonData | ConvertFrom-Json
+
+Name Homepage Youtube
+---- -------- -------
+dotpwsh https://dotpwsh.com https://youtube.com/@moiaune
+
+```
+
+### Convert CSV string to Powershell object
+
+```powershell
+PS> $csvData = @"
+> name,homepage,twitter
+> dotpwsh,https://dotpwsh.com,@moiaune
+> "@
+PS> $csvData
+name,homepage,twitter
+dotpwsh,https://dotpwsh.com,@moiaune
+PS> $csvData | ConvertFrom-Csv
+
+name homepage twitter
+---- -------- -------
+dotpwsh https://dotpwsh.com @moiaune
+```
+
+### Convert Powershell object to CSV
+
+```powershell
+PS> $myobj = [PSCustomObject]@{
+> Name = "dotpwsh"
+> Homepage = "https://dotpwsh.com"
+> Twitter = "@moiaune"
+> }
+PS> $myobj | ConvertTo-Csv
+"Name","Homepage","Twitter"
+"dotpwsh","https://dotpwsh.com","@moiaune"
+```
+
+
+## 3. Interacting with REST API's
+
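+For example, a quick sketch using `Invoke-RestMethod` against a public API (the URLs are just example endpoints):
+
+```powershell
+# GET request; the JSON response is automatically converted to objects
+PS> $response = Invoke-RestMethod -Uri "https://api.github.com/repos/PowerShell/PowerShell"
+PS> $response.full_name
+PowerShell/PowerShell
+
+# POST request with a JSON body
+PS> $body = @{ Name = "dotpwsh" } | ConvertTo-Json
+PS> Invoke-RestMethod -Uri "https://example.com/api/items" -Method Post -Body $body -ContentType "application/json"
+```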
+
+## 4. Archiving and extracting files
+
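+A minimal sketch using the built-in archive cmdlets (the paths are placeholders):
+
+```powershell
+# create a zip archive from a folder
+PS> Compress-Archive -Path .\reports\ -DestinationPath .\reports.zip
+
+# extract it again into another folder
+PS> Expand-Archive -Path .\reports.zip -DestinationPath .\restored-reports\
+```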
+
+## 5. Working with Regular Expressions
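+
+A couple of common patterns (the sample strings and file names are made up):
+
+```powershell
+# -match returns $true/$false and populates the automatic $Matches variable
+PS> "Order-1234" -match "Order-(\d+)"
+True
+PS> $Matches[1]
+1234
+
+# Select-String searches strings or files, a bit like grep
+PS> Get-Content .\app.log | Select-String -Pattern "error"
+```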
diff --git a/content/posts/basic_linux.md b/content/posts/basic_linux.md
new file mode 100644
index 0000000..ff30544
--- /dev/null
+++ b/content/posts/basic_linux.md
@@ -0,0 +1,624 @@
+---
+title: 'Linux Basics'
+description: 'A basic introduction to Linux'
+date: '2024-06-03'
+tags: ['linux', 'cli']
+toc: true
+url: basic-linux
+---
+
+This is a very brief introduction to working with Linux on the command line. It's a collection of commands and tips & tricks that I have collected throughout my Linux journey. Some of it is specific to Debian derivatives. Maybe you'll find something useful too. I will probably update this from time to time. You can look at the history if you want to see what's changed.
+
+<!--more-->
+
+## Find OS information
+
+Print information about your Linux distro.
+```console
+lsb_release -a
+cat /etc/os-release
+uname -a
+```
+
+See how long your machine has been running since last boot.
+
+```console
+claw0ry@lnx:~$ uptime
+ 11:36:40 up 4:02, 1 user, load average: 0.00, 0.00, 0.00
+```
+
+## Built-in documentation with manpages
+
+In Linux we can use the **man** command to read built-in documentation.
+
+```plaintext
+man COMMAND
+```
+
+To list all available manpages.
+
+```console
+man -k .
+```
+
+You can combine it with grep to filter the result if you are not quite sure what the manpage entry is called.
+
+```console
+claw0ry@lnx:~$ man -k . | grep ssh
+ssh (1) - OpenSSH remote login client
+ssh-add (1) - adds private key identities to the OpenSSH authentication agent
+ssh-agent (1) - OpenSSH authentication agent
+ssh-argv0 (1) - replaces the old ssh command-name as hostname handling
+ssh-copy-id (1) - use locally available keys to authorise logins on a remote machine
+ssh-keygen (1) - OpenSSH authentication key utility
+ssh-keyscan (1) - gather SSH public keys from servers
+ssh-keysign (8) - OpenSSH helper for host-based authentication
+ssh-pkcs11-helper (8) - OpenSSH helper for PKCS#11 support
+ssh-sk-helper (8) - OpenSSH helper for FIDO authenticator support
+ssh_config (5) - OpenSSH client configuration file
+sshd (8) - OpenSSH daemon
+sshd_config (5) - OpenSSH daemon configuration file
+~:$ man sshd
+```
+
+## Manage files and directories
+
+### List files and directories
+
+```console
+# list name of files and directories under /var
+ls /var
+
+# show permissions, owners, size, modification date etc for each file and directory in /var
+ls -l /var
+
+# same as above, but also recursively
+ls -lR /var
+
+# '-h' lists sizes in human readable format, '-F' will append a '/' to all directories
+ls -lhF /var
+
+# '-a' will also list hidden files and directories (that starts with '.')
+ls -la $HOME
+```
+
+### Finding files
+
+```console
+# list all files and directories in /var
+find /var
+
+# list all files and directories in /var in 'ls' style
+find /var -ls
+
+# find all files that end with '.log' in /var and run 'file' on each of them
+find /var -name "*.log" -exec file {} \;
+```
+
+### Create files
+
+```console
+touch file.txt
+
+vim file.txt
+
+nano file.txt
+
+echo "Some content" > file.txt
+
+cat file.txt > another_file.txt
+
+ls -lR /var > dir_list.txt
+```
+
+Write directly from **stdin** to a file (also works with append).
+
+```console
+claw0ry@lnx:~$ cat > file.txt
+Write some
+lines
+of text
+<CTRL-D>
+claw0ry@lnx:~$ cat file.txt
+Write some
+lines
+of text
+```
+
+
+### Search content of files
+
+We can use **grep** to search for specific phrases/words or patterns in a file or files.
+
+```console
+grep 'Failed' /var/log/auth.log
+```
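+
+A few commonly used grep flags:
+
+```console
+# '-i' ignores case, '-n' shows line numbers, '-r' searches recursively
+grep -in 'failed' /var/log/auth.log
+grep -rn 'server_name' /etc/nginx/
+```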
+
+### Search logfiles
+
+Debian derivatives now use **journalctl** to display logs. Here are some basic filtering techniques.
+
+- '--since': Filter on time. Can use 'yesterday', 'today' or a datetime '2024-07-09', '2024-07-09 18:00:00'
+- '--grep': Filter using grep on the `MESSAGE=` field
+- '--unit': Filter on the service, e.g ssh, nginx, apache2 etc
+
+```console
+journalctl --since yesterday --grep 'failed' --unit ssh
+```
+
+### Archiving files and directories
+
+In Linux we use the **tar** command to create an archive. In most cases we also want to compress it with gzip.
+
+- 'c': Create an archive
+- 'v': Verbose
+- 'z': Compress the archive with gzip
+- 'f': Specify output file
+
+```console
+# backup my home directory to mybackup.tar.gz
+tar -cvzf mybackup.tar.gz /home/claw0ry
+```
+
+If we omit the 'z' flag, tar would create an uncompressed archive about the same size as my home folder.
+
+We can list the contents of a tar archive (or tarball).
+
+```console
+tar -tvf mybackup.tar.gz
+```
+
+To decompress or unpack an archive we also use **tar**.
+
+- 'x': Extract an archive
+- 'C': Use another directory than current working directory
+
+```console
+# extract to current working directory
+tar -xf mybackup.tar.gz
+
+# or to a specific directory
+tar -xf mybackup.tar.gz -C /tmp/backup
+```
+
+### Transfer files
+
+We can transfer files to and from other computers/servers with **scp** or **rsync**.
+
+NOTE: If we have the same username on both ends, we don't need to specify the USER.
+
+Using **scp**.
+
+```plaintext
+scp [src] [dest]
+
+# from local to remote
+scp [local-path] [USER@]HOST:[remote-path]
+
+# from remote to local
+scp [USER@]HOST:[remote-path] [local-path]
+```
+
+Using **rsync**.
+
+```plaintext
+rsync -aP [src] [dest]
+
+# from local to remote
+rsync -aP [local-path] [USER@]HOST:[remote-path]
+
+# from remote to local
+rsync -aP [USER@]HOST:[remote-path] [local-path]
+
+# optionally we can specify -n to do a dry-run to see what would happen
+rsync -naP [local-path] [USER@]HOST:[remote-path]
+```
+
+Real world examples.
+
+```console
+# copy my local '.bashrc' to a lab server
+scp /home/claw0ry/.bashrc lab_user@lab1.example.com:/home/lab_user/.bashrc
+
+# sync my blog folder to webserver
+rsync -aP /home/claw0ry/code/blog web.claw0ry.net:/var/www
+```
+
+### Change permissions, owners and groups
+
+Which files and directories can be accessed, modified etc by whom is determined by permissions and owners.
+
+We can see the permissions and owners of a file/directory with **ls**.
+
+```console
+claw0ry@lnx:~$ ls -la /home/cloud_user
+total 20
+drwxr-xr-x 3 cloud_user cloud_user 4096 Jul 9 08:25 .
+drwxr-xr-x 4 root root 4096 Aug 24 2023 ..
+-rw-r--r-- 1 cloud_user cloud_user 1 Feb 29 19:38 .bash_history
+-rw-r--r-- 1 cloud_user cloud_user 0 Feb 29 19:16 .cloud-locale-test.skip
+-rw------- 1 cloud_user cloud_user 57 Jul 9 08:17 .lesshst
+drwx------ 2 cloud_user cloud_user 4096 Aug 24 2023 .ssh
+-rw-r--r-- 1 cloud_user cloud_user 0 Aug 28 2023 .sudo_as_admin_successful
+```
+
+The first character indicates what kind of item it is:
+
+- '-': file
+- 'd': directory
+- 'l': symbolic link
+
+Next we have the permissions, divided into three groups. The first three characters are the owner's permissions, the next three are the group's permissions, and the last three are everyone else's (others) permissions.
+
+- '-': none
+- 'r': read
+- 'w': write
+- 'x': execute
+
+The owner of the file is specified in the first column where you see `cloud_user`, and the group is specified in the column next to it. In this example we can see that all files and directories are owned by the `cloud_user` user and `cloud_user` group, except for the parent directory which is owned by the `root` user and `root` group.
+
+#### Change owners and groups
+
+We can change the owner and group with the **chown** command.
+
+```plaintext
+chown [-R] [OWNER][:GROUP] FILE
+
+# only change owner
+chown OWNER FILE
+
+# only change group
+chown :GROUP FILE
+
+# change both owner and group (to different values)
+chown OWNER:GROUP FILE
+
+# change both owner and group to the same value
+chown OWNER: FILE
+```
+
+- 'R': Recursive (will also change owner for all subfiles and directories)
+
+```console
+claw0ry@lnx:~$ cd /home/cloud_user
+claw0ry@lnx:~$ touch file.txt
+claw0ry@lnx:~$ ls -l file.txt
+-rw-r--r-- 1 cloud_user cloud_user 0 Jul 9 08:37 file.txt
+claw0ry@lnx:~$ chown another_user:another_user file.txt
+claw0ry@lnx:~$ ls -l file.txt
+-rw-r--r-- 1 another_user another_user 0 Jul 9 08:37 file.txt
+claw0ry@lnx:~$ chown cloud_user file.txt
+claw0ry@lnx:~$ ls -l file.txt
+-rw-r--r-- 1 cloud_user another_user 0 Jul 9 08:37 file.txt
+claw0ry@lnx:~$ chown :cloud_user file.txt
+claw0ry@lnx:~$ ls -l file.txt
+-rw-r--r-- 1 cloud_user cloud_user 0 Jul 9 08:37 file.txt
+```
+
+#### Change permissions
+
+To set permissions we use the **chmod** command.
+
+There are two modes to set permissions with; symbolic and octal.
+
+##### Octal mode
+
+```plaintext
+chmod OCTAL FILE
+```
+
+- read(r): 4
+- write(w): 2
+- execute(x): 1
+
+To calculate the permissions bits we just need to add the permissions together. So for read and write access it will be `4+2=6`. For all permissions it will be `4+2+1=7`.
+
+Let's say we have `file.txt` and we want the owner to have full permissions, the group to have read and write and others to have none. This would calculate to `760`. We can set these permissions with the following command.
+
+```console
+claw0ry@lnx:~$ chmod 760 file.txt
+claw0ry@lnx:~$ ls -l file.txt
+-rwxrw---- 1 cloud_user cloud_user 0 Jul 9 08:37 file.txt
+```
+
+These are the most common permissions used:
+
+- 777 (everyone has full permissions)
+- 644 (owner has read+write, and everyone else has read)
+- 750 (owner has full permissions, group has read+execute, and everyone else has none)
+- 600 (owner is the only user that has access)
+
+##### Symbolic mode
+
+```plaintext
+chmod MODE FILE
+```
+
+- '+': add permission
+- '-': remove permission
+- '=': set permissions
+- 'u': owner (user)
+- 'g': group
+- 'o': others
+
+```console
+# add read permissions for the owner
+chmod u+r file.txt
+
+# remove execute for owner
+chmod u-x file.txt
+
+# set owners permissions to read, write, execute
+chmod u=rwx file.txt
+
+```
+
+We can combine permissions with `,`.
+
+```console
+claw0ry@lnx:~$ ls -l file.txt
+-rwxrw---- 1 cloud_user cloud_user 0 Jul 9 08:37 file.txt
+claw0ry@lnx:~$ chmod u=rwx,g=rw,o=r file.txt
+claw0ry@lnx:~$ ls -l file.txt
+-rwxrw-r-- 1 cloud_user cloud_user 0 Jul 9 08:37 file.txt
+```
+
+## Input/output redirection
+
+In Linux shells there is a concept of three streams.
+
+- The standard input (stdin), which takes the users input.
+- The standard output (stdout), which is the output of a command. It is usually displayed in your terminal.
+- The standard error (stderr), which is the error messages from a command. This is also usually displayed in your terminal alongside stdout.
+
+Write errors to file instead of console.
+
+```console
+# no redirection
+claw0ry@lnx:~$ ls -l nonexisting
+ls: nonexisting: No such file or directory
+
+# with redirection of stderr
+claw0ry@lnx:~$ ls -l nonexisting 2> ls_error.txt
+claw0ry@lnx:~$ cat ls_error.txt
+ls: nonexisting: No such file or directory
+```
+
+Redirect **stdout** to a file and only show errors in the console.
+
+```console
+# these two are equivalent
+find /root 1> dirs_i_can_read.txt
+find /root > dirs_i_can_read.txt
+```
+
+Redirect both **stdout** and **stderr** to the same destination.
+
+```console
+find /root > root_dirs.txt 2>&1
+```
+
+This tells the shell to redirect **stdout** to the file and then redirect **stderr** to the same place as **stdout**. Note that the order matters: the `2>&1` must come after the file redirection.
+
+
+## Reboot and PowerOff
+
+```plaintext
+shutdown [OPTIONS] [TIME] [WALL...]
+```
+
+```console
+# poweroff
+poweroff
+shutdown -P
+shutdown -P 20:00
+shutdown -P +5
+
+# reboot
+reboot
+shutdown -r [TIME]
+shutdown -r 20:00
+shutdown -r +5 'Server will be rebooted for maintenance!'
+```
+
+## Manage users and groups
+
+```console
+# Add a new user
+adduser LOGIN
+
+# add user to group
+gpasswd -a USER GROUP
+
+# give sudo access
+cat /etc/sudoers
+gpasswd -a USER sudo
+```
+
+## Disk space
+
+List filesystem space usage.
+
+```console
+claw0ry@lnx:~$ df -h
+Filesystem Size Used Avail Use% Mounted on
+udev 467M 0 467M 0% /dev
+tmpfs 96M 484K 95M 1% /run
+/dev/nvme0n1p1 20G 2.1G 17G 11% /
+tmpfs 477M 0 477M 0% /dev/shm
+tmpfs 5.0M 0 5.0M 0% /run/lock
+/dev/nvme0n1p15 124M 12M 113M 10% /boot/efi
+tmpfs 96M 0 96M 0% /run/user/1001
+tmpfs 96M 0 96M 0% /run/user/1002
+```
+
+List directory space usage.
+
+```console
+claw0ry@lnx:~$ du -hs /home/cloud_user
+36K /home/cloud_user
+claw0ry@lnx:~$ sudo du -hs /var/log
+55M /var/log
+```
+
+If we omit the `-s` we will also see subdirectories.
+
+```console
+claw0ry@lnx:~$ sudo du -h /var/log
+4.0K /var/log/runit/ssh
+8.0K /var/log/runit
+52K /var/log/apt
+4.0K /var/log/private
+53M /var/log/journal/ec228e8f22cbefcdced18ace3b891949
+53M /var/log/journal
+16K /var/log/unattended-upgrades
+32K /var/log/amazon/ssm/audits
+852K /var/log/amazon/ssm
+856K /var/log/amazon
+55M /var/log
+```
+
+You can also sort the result to find what takes the most space. Since we are using the `-h` flag on **du** to get human readable sizes, we also provide the same flag to **sort** so that it sorts correctly.
+
+```console
+claw0ry@lnx:~$ sudo du -h /var/log | sort -h
+4.0K /var/log/chrony
+4.0K /var/log/private
+4.0K /var/log/runit/ssh
+8.0K /var/log/runit
+36K /var/log/apt
+136K /var/log/nginx
+440K /var/log/sysstat
+25M /var/log
+25M /var/log/journal
+25M /var/log/journal/b8d53a40a48e4a9baaf67b1d19735980
+```
+
+To get the size of a file(s), you can use the **ls** command with `-lh` to get the size in human readable format.
+
+Another alternative is **ncdu** which is an interactive version of `du` built with ncurses. It does not come pre-installed.
+
+```console
+sudo apt install ncdu
+
+# replace /home with whatever starting path you want
+ncdu /home
+```
+
+Now you can navigate and see the size of files and directories interactively.
+
+## Processes
+
+Dashboard overview of your processes, CPU and memory usage.
+```console
+htop
+```
+
+Snapshot of the current process state.
+```console
+ps aux
+```
+
+Kill a process.
+
+```console
+# kill by process ID
+kill -9 PID
+
+# or kill by process name
+pkill -9 NAME
+```
+
+## Schedule tasks
+
+In Linux we use **cron** with a `crontab` file to schedule runs of commands or scripts.
+
+The crontab template looks like this.
+
+```plaintext
+# +----------- minute (0 - 59)
+# | +--------- hour (0 - 23)
+# | | +------- day of month (1 - 31)
+# | | | +----- month (1 - 12)
+# | | | | +--- day of week (0 - 6) (0 is Sunday)
+# | | | | |
+# * * * * * COMMAND
+#
+# NOTES
+# The 'month' and 'day of week' can be represented by either a number, name or shortname.
+# e.g 1, January, Jan
+# e.g 1, Monday, Mon
+#
+# COMMAND can be either a command or a path to a script. You can separate multiple
+# commands with ';'.
+#
+# Visit https://crontab.guru for a visual representation of cron schedule
+```
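+
+For example, an entry that backs up a home directory every night at 03:00 could look like this (the paths are just for illustration):
+
+```plaintext
+0 3 * * * /usr/bin/tar -czf /backup/home.tar.gz /home/claw0ry
+```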
+
+To edit your crontab you use the **crontab** command.
+
+```console
+crontab -e
+```
+
+You can also specify the user to edit crontab for if you want it to run as another user.
+
+```console
+crontab -u USER -e
+```
+
+> NOTE: When adding commands to your crontab, make sure you use full paths for both commands and scripts.
+
+You can check the cron logs using **journalctl**.
+
+```console
+journalctl --unit cron
+```
+
+## Manage and update packages
+
+### dpkg
+
+```console
+# install .deb package from file
+dpkg -i FILE.deb
+
+# list installed packages
+dpkg -l
+
+# remove package
+dpkg -r NAME
+
+# search installed packages
+dpkg -S PATTERN
+```
+
+NOTE: When installing packages with `dpkg -i`, no dependency checks are performed. If a dependency is missing, the installation will fail.
+
+### apt
+
+```console
+# update package list
+apt update
+
+# upgrade installed packages
+apt upgrade
+
+# install package(s)
+apt install NAME
+
+# remove package
+apt remove NAME
+
+# remove package and config
+apt purge NAME
+
+# remove unwanted packages
+apt autoremove
+
+# search packages
+apt search PATTERN
+
+# show package details
+apt show NAME
+
+# list installed packages
+apt list --installed
+
+# list packages that can be upgraded
+apt list --upgradeable
+```
diff --git a/content/posts/cgit_idle_no_data.md b/content/posts/cgit_idle_no_data.md
new file mode 100644
index 0000000..a883e44
--- /dev/null
+++ b/content/posts/cgit_idle_no_data.md
@@ -0,0 +1,79 @@
+---
+title: 'cgit: Idle has no value'
+description: 'How to populate the idle column in cgit if you dont have a default setup'
+date: '2024-10-24T16:00:00+02:00'
+tags: ['cgit', 'git']
+---
+
+I decided to brush up on my cgit setup today and realized that the 'idle' column had no data for any of my repositories. There's a reason for this that might not be obvious to many people.
+
+<!--more-->
+
+By default cgit is hard-coded to look at the "master" branch when determining idle time. The reason is that this was conventionally the name of the main branch. After some controversy in mid-2020, Github (and a bunch of others) changed the default branch name from "master" to "main"[[1]](#reference-1).
+
+In order to tell cgit that we want to use "main" instead, we can either use a repository-specific cgitrc file or use the git config file that comes with our repository. Both of these options require you to set `scan-path` in your `/etc/cgitrc` file, and the git config approach additionally requires `enable-git-config=1` (see the man page excerpt below).
+
+If we search in `man cgitrc`, this is what it tells us:
+
+```
+enable-git-config
+ Flag which, when set to "1", will allow cgit to use git config to set any repo specific settings. This
+ option is used in conjunction with "scan-path", and must be defined prior, to augment repo-specific
+ settings. The keys gitweb.owner, gitweb.category, gitweb.description, and gitweb.homepage will map to the
+ cgit keys repo.owner, repo.section, repo.desc, and repo.homepage respectively. All git config keys that
+ begin with "cgit." will be mapped to the corresponding "repo." key in cgit. Default value: "0". See also:
+ scan-path, section-from-path.
+
+...
+
+REPOSITORY-SPECIFIC CGITRC FILE
+ When the option "scan-path" is used to auto-discover git repositories, cgit will try to parse the file
+ "cgitrc" within any found repository. Such a repo-specific config file may contain any of the repo-specific
+ options described above, except "repo.url" and "repo.path". Additionally, the "filter" options are only
+ acknowledged in repo-specific config files when "enable-filter-overrides" is set to "1".
+
+ Note: the "repo." prefix is dropped from the option names in repo-specific config files, e.g. "repo.desc"
+ becomes "desc".
+```
+
+So let's tell cgit to use "main" as our default branch, instead of the hard-coded "master".
+
+First we need to make sure that `scan-path` and `enable-git-config` are set correctly in our `/etc/cgitrc`. I have all my repositories in `/var/www/git` and so my config looks like this:
+
+```conf
+# ...
+
+# must be set before scan-path when using the repo's git config file
+enable-git-config=1
+scan-path=/var/www/git/
+```
+
+Then we can either use a `cgitrc` file or the `config` file that is generated by git. I myself use the `config` file generated by git, because it's already there. One less file to remember when setting up a new repository. But I will show you both examples.
+
+
+```plaintext
+# /path/to/repo/cgitrc
+
+defbranch = main
+```
+
+or (my preferred way)
+
+```plaintext
+# /path/to/repo/config
+
+# ...
+
+[cgit]
+ defbranch = main
+```
+
+You can of course change "main" to whatever you call your default branch.
+
+## References
+
+{{< rawhtml >}}
+<ol>
+ <li>
+ <a href="https://www.vice.com/en/article/github-to-remove-masterslave-terminology-from-its-platform/" id="reference-1">Github to Remove ‘Master/Slave’ Terminology From its Platform</a> Vice.
+ </li>
+</ol>
+{{< /rawhtml >}}
diff --git a/content/posts/compare-two-dates-in-servicenow.md b/content/posts/compare-two-dates-in-servicenow.md
new file mode 100644
index 0000000..15f053b
--- /dev/null
+++ b/content/posts/compare-two-dates-in-servicenow.md
@@ -0,0 +1,99 @@
+---
+title: 'Compare two dates in ServiceNow'
+date: 2022-05-10T00:00:00+00:00
+draft: false
+---
+
+To work with date and datetime in ServiceNow we can use the [GlideDateTime API](https://developer.servicenow.com/dev.do#!/reference/api/sandiego/server/no-namespace/c_APIRef).
+
+<!--more-->
+
+## Get duration
+
+```javascript
+var date1 = new GlideDateTime('2022-05-10 09:00:00');
+var date2 = new GlideDateTime('2022-05-12 12:00:00');
+
+var diff = GlideDateTime.subtract(date1, date2);
+gs.info(diff.getDisplayValue());
+
+// should print: 2 Days 3 Hours
+```
+
+## Adding/removing
+
+```javascript
+var date1;
+
+// Adding days
+date1 = new GlideDateTime('2022-05-10 09:00:00');
+date1.addDaysUTC(2); // 2022-05-12 09:00:00
+
+// Subtract days
+date1 = new GlideDateTime('2022-05-10 09:00:00');
+date1.addDaysUTC(-2); // 2022-05-08 09:00:00
+
+// Add seconds
+date1 = new GlideDateTime('2022-05-10 09:00:00');
+date1.addSeconds(1000); // 2022-05-10 09:16:40
+
+// Subtract seconds
+date1 = new GlideDateTime('2022-05-10 09:00:00');
+date1.addSeconds(-1000); // 2022-05-10 08:43:20
+```
+
+## Compare datetime
+
+### Simple comparison
+
+```javascript
+var date1 = new GlideDateTime('2022-05-10 09:00:00');
+var date2 = new GlideDateTime('2022-05-12 12:00:00');
+
+if (date1 > date2) {
+ gs.info('date 1 is newer than date 2');
+} else {
+ gs.info('date 1 is older than date 2');
+}
+
+// should print: date 1 is older than date 2
+```
+
+### After/before
+
+```javascript
+var date1 = new GlideDateTime('2022-05-10 09:00:00');
+var date2 = new GlideDateTime('2022-05-12 12:00:00');
+
+if (date1.after(date2)) {
+ gs.info('date 1 is newer than date 2');
+}
+
+if (date1.before(date2)) {
+ gs.info('date 1 is older than date 2');
+}
+
+// should print: date 1 is older than date 2
+```
+
+### Real world example
+
+Let's say we want to log all incidents that haven't been updated in the last 7 days.
+
+```javascript
+var now = new GlideDateTime();
+var incident = new GlideRecord('incident');
+incident.addActiveQuery();
+incident.query();
+
+while (incident.next()) {
+ var lastUpdatedOn = new GlideDateTime(incident.sys_updated_on);
+ lastUpdatedOn.addDaysUTC(7);
+
+ // if current datetime is after sys_updated_on + 7 days, then we know
+ // that 7 days has passed
+ if (now.after(lastUpdatedOn)) {
+ gs.info('Incident ' + incident.number + ' has not been updated in the last 7 days');
+ }
+}
+```
diff --git a/content/posts/copy-to-clipboard-in-servicenow.md b/content/posts/copy-to-clipboard-in-servicenow.md
new file mode 100644
index 0000000..2937d9f
--- /dev/null
+++ b/content/posts/copy-to-clipboard-in-servicenow.md
@@ -0,0 +1,25 @@
+---
+title: "Copy to Clipboard in Servicenow"
+description: "A simple snippet for copying fields or other values to clipboard from a UI Action in ServiceNow"
+tags: ["ServiceNow", "Javascript"]
+date: 2024-03-22T14:10:35+01:00
+draft: false
+---
+
+Recently I was asked to find a way for users to easily create a string consisting of the task number and short description that they could paste into the time management software. In this short article we're going to take a look at how we can copy something to your system clipboard from a UI Action in ServiceNow.
+
+<!--more-->
+
+## The snippet
+
+```javascript
+if (navigator.clipboard.writeText) {
+ var number = g_form.getValue('number');
+ var short_description = g_form.getValue('short_description');
+ navigator.clipboard.writeText(number + " - " + short_description).then(function() {
+ console.log("Copied!");
+ });
+}
+```
+
+The `if` condition checks whether your browser supports this function. Then we retrieve the values of the `number` and `short_description` fields using `g_form`. Lastly we asynchronously write the string to the system clipboard. `navigator.clipboard.writeText` is a native Javascript function and can be used on other sites as well.
diff --git a/content/posts/deploy-hugo-on-git-push.md b/content/posts/deploy-hugo-on-git-push.md
new file mode 100644
index 0000000..dca0ff7
--- /dev/null
+++ b/content/posts/deploy-hugo-on-git-push.md
@@ -0,0 +1,13 @@
+---
+title: 'Deploy Hugo on Git Push'
+description: ''
+date: 2024-10-24T21:56:05+02:00
+tags: ['hugo', 'git']
+draft: true
+---
+
+1. Create a bare repository on your git server
+2. Create a git hook "post-receive" in that bare repository (see the sketch below)
+3. On your local machine add the remote repository
+4. Push changes from your local machine to the remote repository
+5. Watch how hugo builds and deploys to server
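+
+A rough sketch of what the `post-receive` hook could look like (the repository path, work tree and web root are assumptions):
+
+```bash
+#!/bin/sh
+# hooks/post-receive in the bare repository
+WORKTREE=/srv/hugo/site   # where the source gets checked out
+PUBLIC=/var/www/blog      # web root served by nginx/Apache
+
+GIT_WORK_TREE="$WORKTREE" git --git-dir=/srv/git/blog.git checkout -f main
+cd "$WORKTREE" && hugo --destination "$PUBLIC"
+```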
diff --git a/content/posts/enable-external-booking-of-meeting-rooms.md b/content/posts/enable-external-booking-of-meeting-rooms.md
new file mode 100644
index 0000000..a19a193
--- /dev/null
+++ b/content/posts/enable-external-booking-of-meeting-rooms.md
@@ -0,0 +1,140 @@
+---
+title: "Enable External Booking of Meeting Rooms"
+description: "From time to time you might need to share one or more of your meeting rooms with external users. In this posts we're going through three options for how you can enable this"
+date: 2020-02-14T00:00:00+01:00
+tags: ["powershell", "office365"]
+draft: false
+---
+
+From time to time you might need to share one or more of your meeting rooms with external users. In this post we're going through three options for how you can enable this. Other options do exist (like connecting one Office 365 tenant to another and so on), but this post is meant for the times you need to give temporary access, or access only to one or more users.
+
+<!--more-->
+
+_Disclaimer: I have only tried this in Exchange Online._
+
+## Enable processing of external meeting messages
+
+For Exchange Online to even consider accepting external messages we must enable `ProcessExternalMeetingMessages` via Powershell.
+
+```powershell
+Set-CalendarProcessing -Identity "Meeting Room Name" `
+-ProcessExternalMeetingMessages $true
+```
+
+This opens up booking of the meeting room to EVERYONE. Most likely you'd want to limit it to either some users or domains. We'll take a look at that next.
+
+## Limit external users
+
+There are a few ways we can limit which of our external users that can book the meeting room.
+
+* Mail-enabled Security Groups
+* Transport Rules
+* Set-CalendarProcessing
+
+### 1. Mail-enabled Security Groups
+
+
+> NOTE: I assume you already have either a distribution group or mail-enabled security group for all your internal users or know how to create one.
+
+The first solution we're looking at is mail-enabled security groups in Office 365. Most likely you already have some sort of distribution group or mail-enabled security group with all of your internal users. In addition we're going to add another mail-enabled security group for our external users. This is only viable if the number of users is not too high, because you must add/remove them manually.
+
+Now, this is **IMPORTANT**. You must create the mail-enabled security group in Office 365 directly. If you have a hybrid setup and create the group in Active Directory, you will not be able to add the external user later.
+
+#### Create group in Office 365
+
+1. Go to **Groups -> Add a group**
+2. Choose **Mail-enabled security**
+3. Name your group something appropriate (e.g "Meeting Room External Booking") and give it a description
+4. Give it an email address
+5. Click on **Create group**
+
+#### Set BookInPolicy on meeting room
+
+Our next step is to add the group we just created to the `BookInPolicy` for our meeting room.
+
+Remember I assumed you had a group for all your internal users? Well, we're going to use it now (in this example we'll call it "Company Internal Users"). If your meeting room does not already have a BookInPolicy we must add both groups (internals and externals) and set `AllBookInPolicy` to false.
+
+
+```powershell
+Set-CalendarProcessing -Identity "Meeting Room Name" `
+-AllBookInPolicy $false `
+-BookInPolicy "Company Internal Users", "Meeting Room External Booking"
+```
+
+This will set both groups in the BookInPolicy, meaning that only users who are members of either group are allowed to book. Everyone else will get a rejection message.
+
+#### Add external user to our group
+
+Let's finish off by adding our external users. We do so by inviting them into Azure Active Directory as guests and then adding them to the mail-enabled security group we created earlier.
+
+
+1. Go to **Azure Active Directory -> Users**
+2. Click on **New guest user**
+3. Make sure **Invite user** is checked
+4. Fill in necessary information
+ 1. **Name:** DisplayName
+ 2. **Email Address:** The guest's external email address
+ 3. **Groups:** Unfortunately you cannot select our group from here, so leave this as default
+ 4. **Role:** User
+ 5. **Block sign in:** Must be "No"
+5. Click on **Invite**
+6. Go back to **Office 365 -> Groups**
+7. Click on "Meeting Room External Booking" and go to **Members** tab
+8. Click **View and manage members**
+9. Click **+ Add members**
+10. Select your invited guest(s) in the list and click **Save**
+
+They should now be able to book the meeting room this was set up for, even though they are external.
+
+### 2. Transport Rules
+
+> NOTE: This only works if `AllBookInPolicy` is true and `BookInPolicy` is empty.
+
+Since we enabled `ProcessExternalMeetingMessages`, everyone has access to book our meeting room. Instead of limiting who can book by using groups, we can create a transport rule that states something along the lines of: "If email is sent to this meeting room and comes from outside the organization, delete it. Unless it's from @domain.com/user@domain.com".
+
+#### Powershell
+
+This can easily be achieved with Powershell. Change `SentTo` and `ExceptIfSenderDomainIs` with your values.
+
+```powershell
+New-TransportRule -Name "Meeting Room External Booking" `
+-SentTo "meeting_room@contoso.com" `
+-ExceptIfSenderDomainIs external-corp.com `
+-FromScope NotInOrganization `
+-DeleteMessage $true
+```
+
+#### Exchange Admin Center
+
+If you're not comfortable with Powershell, you can do it in the Exchange Admin Center in Office 365.
+
+1. Choose **mail flow** in the menu to the left
+2. Make sure you're on the **rules** tab
+3. Click the **+**-symbol and choose **Create a new rule**
+
+![External Booking Transport Rule](/images/external-booking-transport-rule.png)
+
+Save and you're done.
+
+### 3. Set-CalendarProcessing
+
+Our last option is to set the external user(s) directly in Powershell with `Set-CalendarProcessing`.
+
+
+> NOTE: This does not support whole domains, only single users, just like using a group.
+
+What we will do is add our external user(s) to the BookInPolicy explicitly, instead of using a group like in the first option. Using a group is a lot more manageable since you can just add/remove user(s) in the group, and it also gives better visibility. But if you for some reason won't use a group, this is how to set it with `Set-CalendarProcessing`.
+
+Note that we must disable `AllBookInPolicy` for the `BookInPolicy` to be activated. That means you have to add a group of everyone else that also should be able to book our meeting room, just like in option 1 where we added "Company Internal Users".
+
+```powershell
+Set-CalendarProcessing -Identity "Meeting Room Name" `
+-AllBookInPolicy $false `
+-BookInPolicy "Company Internal Users", "user@external-corp.com"
+```
+
+Now user@external-corp.com should be able to book our meeting room.
+
+## Conclusion
+
+We have looked at three options for how we can allow external users to book our meeting room. Out of these options, I would recommend option 1 if there are only specific users. Option 2 would be good for allowing whole domains. Option 3 is just an option. It works, but it does not have any benefits over the others, which are easier to manage.
diff --git a/content/posts/exchange-online-check-your-tenant-for-forwarding-rules.md b/content/posts/exchange-online-check-your-tenant-for-forwarding-rules.md
new file mode 100644
index 0000000..fbfe3d6
--- /dev/null
+++ b/content/posts/exchange-online-check-your-tenant-for-forwarding-rules.md
@@ -0,0 +1,106 @@
+---
+title: "Exchange Online: Check your tenant for forwarding rules"
+description: "In this guide we’ll take a look at how you can scan your tenant for “hidden” forwarding rules by using Powershell with Exchange Online."
+date: 2018-12-07T00:00:00+01:00
+tags: ["powershell", "office365"]
+draft: false
+---
+
+In this guide we’ll take a look at how you can scan your tenant for “hidden” forwarding rules by using Powershell with Exchange Online.
+
+<!--more-->
+
+## Why
+
+One technique that is common among hackers who gain access to email accounts is to set up a forwarding rule for all incoming email. That way they can read all new emails sent to the victim without being flagged or detected by audit logs. They can also create rules that move emails from a specific address, e.g. password reset emails, to a folder the hacker controls. They can then request password changes for numerous different services without the user being alerted right away, and maybe get further into your system. Especially if it's cloud based.
+
+Or, it could be an unfaithful employee who forwards emails to a competitor. Either way it's critical.
+
+## How
+
+Exchange Online provides two methods for creating forwarding rules; Inbox Settings and Inbox Rules.
+
+### Inbox Settings
+
+In Exchange Online there’s a specific setting that activates forwarding. In OWA (Outlook Web App) you can find it by browsing to **Mail → Accounts → Forwarding**. In EAC (Exchange Admin Center) open user mailbox then go to **Mailbox Features → Mail Flow → Delivery Options**.
+
+### Inbox Rules
+
+ > A rule is an action that Outlook Web App runs automatically on incoming or outgoing messages. For example, you can create a rule to automatically move all email sent to a group you are a member of to a specific folder, or to delete all messages with “Buy now” in the subject.
+
+This is how Microsoft refers to inbox rules. Basically they are rules that are executed on incoming email, and we are going to look for those that do forwarding.
+
+## How to scan your tenant
+
+First, let’s scan for inbox settings.
+
+```powershell
+Get-Mailbox -ResultSize unlimited |
+Where-Object { ($_.ForwardingSMTPAddress -ne $null) -or ($_.ForwardingAddress -ne $null) } |
+Select-Object Name, ForwardingSMTPAddress, ForwardingAddress, DeliverToMailboxAndForward
+```
+
+**Line 1:** We collect all mailboxes in our tenant. ResultSize is by default 1000, so if you have more mailboxes you must include -ResultSize unlimited.
+
+**Line 2:** We filter out mailboxes that do not have the 'ForwardingSMTPAddress' or 'ForwardingAddress' attribute set. In other words, email forwarding is not configured through settings.
+
+Now let’s scan for inbox rules, which is a little more complicated.
+
+```powershell
+$Mailboxes = Get-Mailbox -ResultSize unlimited
+ForEach ($Mailbox in $Mailboxes) {
+ $MailboxWithRule = Get-InboxRule -Mailbox $Mailbox.Alias | Where-Object { ($_.RedirectTo -ne $null) -or ($_.ForwardTo -ne $null) -or ($_.ForwardAsAttachmentTo -ne $null) }
+ If ($MailboxWithRule -ne $Null) {
+ Write-Host "Mailbox $($Mailbox.PrimarySmtpAddress) has the following rules configured:"
+ $MailboxWithRule | Format-List Name, Identity, RedirectTo, ForwardTo, ForwardAsAttachmentTo
+ }
+}
+```
+
+**Line 1:** We collect all mailboxes in our tenant. ResultSize is by default 1000, so if you have more mailboxes than that you must include -ResultSize unlimited.
+
+**Line 2:** We loop through each mailbox.
+
+**Line 3:** We fetch all inbox rules for the current mailbox and filter out those that do not contain a RedirectTo, ForwardTo or ForwardAsAttachmentTo value. _NOTE: This cmdlet is pretty slow._
+
+**Line 4-7:** If line 3 returns any rules, we will print the address of the mailbox and return all the rules we found.
+
+The output looks a little something like this:
+
+```plaintext
+PS C:\Users\dotsh> .\ExportExchangeForwardingInboxRules.ps1
+...
+Mailbox user@contoso.com has the following rules configured:
+
+Name : SilentForwarding
+PrimarySmtpAddress : user@contoso.com
+Identity : Example User 1\8573016583648562395
+RedirectTo :
+ForwardTo : {"Malicous User" [SMTP:hacker@l33t.com]}
+ForwardAsAttachmentTo :
+
+Name : MyRule
+PrimarySmtpAddress : user@contoso.com
+Identity : Example User 1\8573016583648562395
+RedirectTo :
+ForwardTo : {"Example User 2" [SMTP:user2@contoso.com], "Example User 3" [SMTP:user3@contoso.com],
+ "Example User 4" [SMTP:user4@contoso.com]}
+ForwardAsAttachmentTo :
+
+...
+```
+
+## Finishing up
+
+Often you would forward these results to the head of security or similar. So to make it easier for them to read the report, we can export it to a CSV that opens up in Excel. The easiest way to do this is to put each script in its own file, i.e. `ExportExchangeForwardingInboxSettings.ps1` and `ExportExchangeForwardingInboxRules.ps1`. Then we can call them like this to export to CSV.
+
+```plaintext
+PS C:\Users\dotsh> .\ExportExchangeForwardingInboxSettings.ps1 | Export-Csv -Encoding UTF8 -Delimiter ";" -NoTypeInformation -Path "ExchangeForwardingInboxSettings-Report.csv"
+...
+PS C:\Users\dotsh> .\ExportExchangeForwardingInboxRules.ps1 | Export-Csv -Encoding UTF8 -Delimiter ";" -NoTypeInformation -Path "ExchangeForwardingInboxRules-Report.csv"
+...
+```
+
+By using `-Delimiter ';'` Excel will automatically format the table correctly for us.
+
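+Note that the rules script above prints its findings with `Write-Host` and `Format-List`, which is fine for reading in the console but does not give `Export-Csv` objects to work with. A hypothetical object-emitting variant could look like this (the property names simply mirror the output shown earlier):
+
+```powershell
+Get-Mailbox -ResultSize unlimited | ForEach-Object {
+    $Mailbox = $_
+    Get-InboxRule -Mailbox $Mailbox.Alias |
+        Where-Object { ($_.RedirectTo -ne $null) -or ($_.ForwardTo -ne $null) -or ($_.ForwardAsAttachmentTo -ne $null) } |
+        ForEach-Object {
+            # Emit one object per rule so Export-Csv gets proper rows
+            [PSCustomObject]@{
+                Mailbox               = $Mailbox.PrimarySmtpAddress
+                RuleName              = $_.Name
+                Identity              = $_.Identity
+                RedirectTo            = ($_.RedirectTo -join '; ')
+                ForwardTo             = ($_.ForwardTo -join '; ')
+                ForwardAsAttachmentTo = ($_.ForwardAsAttachmentTo -join '; ')
+            }
+        }
+}
+```
+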
+Now it's just a matter of sending the report to the right person, and you may just have avoided a massive data breach.
diff --git a/content/posts/generate-access-tokens-for-microsoft-services-with-powershell.md b/content/posts/generate-access-tokens-for-microsoft-services-with-powershell.md
new file mode 100644
index 0000000..1eccb71
--- /dev/null
+++ b/content/posts/generate-access-tokens-for-microsoft-services-with-powershell.md
@@ -0,0 +1,82 @@
+---
+title: "Generate Access Tokens for Microsoft Services With Powershell"
+date: 2022-02-09T15:50:44+01:00
+tags: ["powershell", "REST", "api", "microsoft"]
+draft: false
+---
+
+As automators we often need to interact with REST APIs, and if you are working with Microsoft Azure you have probably found yourself dealing with several of Microsoft's services, e.g. Microsoft Graph, Azure Resource Manager or Partner Center. Many of these services are supported by a Powershell module that handles authentication etc. But I have found lately that more often than not it's actually easier to just work with the raw REST API, especially for cross-platform development. In this article we're going to take a look at two flows for how we can authenticate with the different services.
+
+<!--more-->
+
+## The baseline
+
+The baseline for all requests for an access token to Microsoft services is this:
+
+* The URL is `https://login.microsoftonline.com/<tenant_id>/oauth2/v2.0/token`
+* The Method is `POST`
+* The Content-Type is `application/x-www-form-urlencoded`
+* The request body requires these parameters
+ * client_id
+ * client_secret
+ * grant_type
+ * scope
+
+In the response you will get an `access_token` which you include in the request header as `Authorization: Bearer <access_token>` in subsequent requests.
+
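+As a quick illustration (a sketch; the endpoint and the permissions it requires are assumptions), a subsequent request to Microsoft Graph would look like this:
+
+```powershell
+# Use the access token from the token response in the Authorization header
+$headers = @{ Authorization = "Bearer <access_token>" }
+Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/users" -Headers $headers
+```
+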
+## Client Credentials
+
+This is the flow you most likely will use if you are authenticating as a service principal. For the client credentials flow we need to set `grant_type` to `client_credentials`.
+
+```powershell
+$reqBody = @{
+ client_id = "<client_id>"
+ client_secret = "<client_secret>"
+ grant_type = "client_credentials"
+ scope = "https://graph.microsoft.com/.default"
+}
+
+$params = @{
+ Uri = "https://login.microsoftonline.com/{0}/oauth2/v2.0/token" -f ("<tenant_id>")
+ Method = "POST"
+ ContentType = "application/x-www-form-urlencoded"
+ Body = $reqBody
+}
+
+$token = Invoke-RestMethod @params
+```
+
+## Refresh Token
+
+This is the flow you most likely will use if you need to authenticate on behalf of a user that requires MFA. In addition to changing `grant_type` to `refresh_token` we also need to provide a refresh token.
+
+```powershell
+$reqBody = @{
+ client_id = "<client_id>"
+ client_secret = "<client_secret>"
+ grant_type = "refresh_token"
+ refresh_token = "<your_refresh_token>"
+ scope = "https://graph.microsoft.com/.default"
+}
+
+$params = @{
+ Uri = "https://login.microsoftonline.com/{0}/oauth2/v2.0/token" -f ("<tenant_id>")
+ Method = "POST"
+ ContentType = "application/x-www-form-urlencoded"
+ Body = $reqBody
+}
+
+$token = Invoke-RestMethod @params
+```
+
+## Scopes
+
+So, how do we use this to authenticate to the different services? Up until now we have only authenticated with Microsoft Graph, meaning that our `access_token` will only work with requests to the Microsoft Graph REST API. To authenticate with other services we simply change the scope, as shown in the example below. Here are some useful scopes:
+
+* https://graph.microsoft.com/.default (Microsoft Graph API)
+* https://management.azure.com/.default (Azure Resource Manager API)
+* https://api.partnercenter.microsoft.com/user_impersonation (Partner Center REST API, requires a refresh_token)
+
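+For example, to request a token for the Azure Resource Manager API instead of Microsoft Graph, only the `scope` changes (a sketch reusing the client credentials request from earlier):
+
+```powershell
+$reqBody = @{
+    client_id     = "<client_id>"
+    client_secret = "<client_secret>"
+    grant_type    = "client_credentials"
+    scope         = "https://management.azure.com/.default"
+}
+
+$params = @{
+    Uri         = "https://login.microsoftonline.com/{0}/oauth2/v2.0/token" -f ("<tenant_id>")
+    Method      = "POST"
+    ContentType = "application/x-www-form-urlencoded"
+    Body        = $reqBody
+}
+
+$armToken = Invoke-RestMethod @params
+```
+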
+## Wrap-Up
+
+Microsoft supports several authentication flows, and you can read more in depth about them here: [MSAL Authentication Flows](https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-authentication-flows).
diff --git a/content/posts/generate-microsoft-partner-center-refresh-token.md b/content/posts/generate-microsoft-partner-center-refresh-token.md
new file mode 100644
index 0000000..2b8fe97
--- /dev/null
+++ b/content/posts/generate-microsoft-partner-center-refresh-token.md
@@ -0,0 +1,86 @@
+---
+title: "Generate Microsoft Partner Center Refresh Token"
+date: 2021-11-03T23:22:03+01:00
+tags: ["powershell", "partner-center"]
+draft: false
+---
+
+Microsoft Partner Center is a portal where you can manage all of your CSP customers, and it can give you a lot of access and power to do so. Therefore you should naturally have strong security on the users that have access to this portal, like MFA for example.
+
+<!--more-->
+
+Hopefully you have MFA enabled on all your Partner Center users, as you should. But MFA does not work well with unattended authentication, like in scripts for example. So how can we do unattended authentication and automate some of the tasks in Partner Center?
+
+In the Partner Center you can create something they call "Web apps" or "Native apps", which work like service principals, but they will not give you access to your customers' data. For that, you will need to authenticate as a Partner Center user that has either the "Admin agent", "Sales agent" or "Helpdesk agent" role (depending on access level) in addition to using an Azure service principal. They call this "App + User authentication".
+
+To use the Partner Center SDK or REST API with these permissions, and without having to use MFA all the time, you must create a refresh token. Creating this refresh token is a manual one-time job and it will be valid for 90 days. Every time it is used the timer resets, but if it is not used for 90 days it will expire. Let's see how we can generate a refresh token. In this example we're going to use Powershell with the Partner Center Powershell module.
+
+## 1. Create an Azure Service Principal
+
+The first step is to create an Azure service principal in the same tenant as your Partner Center.
+
+1. Create a new App registration
+2. Generate a secret
+
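+If you prefer to script these two steps, a rough sketch with the Azure CLI (the display name below is just an example) could be:
+
+```bash
+# Creates an app registration + service principal and generates a client secret
+az ad sp create-for-rbac --name "partner-center-automation"
+# Note the appId (client id) and password (client secret) from the output
+```
+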
+## 2. Generate refresh token
+
+When the service principal is in place you can generate a refresh token by combining the service principal with your Partner Center user credentials.
+
+>NOTE: `$appId`, `$appSecret` and the refresh token should be stored safely, e.g. in Azure Key Vault, and fetched from there instead of being hardcoded, for security reasons.
+
+```powershell
+$appId = "" # Service principal app id
+$appSecret = ConvertTo-SecureString -String "" -AsPlainText # Service principal secret
+$tenantId = "" # Partner Center tenant id
+$credential = [PSCredential]::new($appId, $appSecret)
+
+$tokenSplat = @{
+ ApplicationId = $appId
+ Credential = $credential
+ Scopes = "https://api.partnercenter.microsoft.com/user_impersonation"
+ ServicePrincipal = $true
+ TenantId = $tenantId
+ UseAuthorizationCode = $true
+}
+
+$token = New-PartnerAccessToken @tokenSplat
+```
+
+This will open a new tab in your browser and ask you to log in. Now you must log in with your Partner Center user credentials. When that is done, and if successful, your refresh token is stored in `$token` and can be accessed like `$token.RefreshToken`. This will also give you an access token if you want to work with the REST API directly. This access token must be included in the `Authorization` header as `Bearer <accessToken>`. It's not needed when using the PartnerCenter Powershell module.
+
+## 3. Connect to Partner Center
+
+Now to use this refresh token to authenticate to Partner Center we do this.
+
+```powershell
+$connectSplat = @{
+ ApplicationId = $appId
+ Credential = $credential
+ RefreshToken = $token.RefreshToken
+}
+
+Connect-PartnerCenter @connectSplat
+```
+
+You should now be logged in with the same permissions as your Partner Center user.
+
+## 4. Generate new access token
+
+If you already have a refresh token, you can generate a new access token (as they only live for 1 hour) by running the `New-PartnerAccessToken` cmdlet with a different set of parameters.
+
+Use the previous `$appId`, `$credential` and `$tenantId` from when you generated the refresh token.
+
+```powershell
+$refreshToken = "<refresh_token>"
+$tokenSplat = @{
+ ApplicationId = $appId
+ Credential = $credential
+ Scopes = "https://api.partnercenter.microsoft.com/user_impersonation"
+ ServicePrincipal = $true
+ TenantId = $tenantId
+ RefreshToken = $refreshToken
+}
+$newToken = New-PartnerAccessToken @tokenSplat
+```
+
+This will give you a new access token without you having to log in with MFA. It will also give you a new refresh token. This is because refresh tokens from Partner Center only live for 90 days, so you must update your refresh token before it expires. If you are storing the refresh token in, say, Azure Key Vault, it's a good idea to update it whenever you generate a new access token. Then you will always have an up-to-date refresh token.
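+
+A sketch of that last step, assuming the Az.KeyVault Powershell module and a vault and secret name of your own choosing:
+
+```powershell
+# Overwrite the stored refresh token with the new one returned by New-PartnerAccessToken
+$secretValue = ConvertTo-SecureString -String $newToken.RefreshToken -AsPlainText -Force
+Set-AzKeyVaultSecret -VaultName "<vault_name>" -Name "PartnerCenterRefreshToken" -SecretValue $secretValue
+```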
diff --git a/content/posts/get-array-of-unique-objects-in-servicenow.md b/content/posts/get-array-of-unique-objects-in-servicenow.md
new file mode 100644
index 0000000..c85692b
--- /dev/null
+++ b/content/posts/get-array-of-unique-objects-in-servicenow.md
@@ -0,0 +1,55 @@
+---
+title: "Get Array of Unique Objects in Servicenow"
+description: "How to get only unique objects from an array of objects in ServiceNow"
+date: 2024-06-14T12:29:18+02:00
+tags: ['servicenow', 'javascript']
+---
+
+It's no secret that the server-side Javascript support in ServiceNow is lacking, and today I came across another little quirk. In ServiceNow we have `ArrayUtil.unique()` to get an array of unique values, but it does not support objects. Neither does ServiceNow support `Map()` or `Set()` on the server side, so here's a little snippet to filter an array of objects and receive unique objects based on an object `key`.
+
+<!--more-->
+
+```javascript
+/**
+ * Get unique objects from an array of object, by key.
+ *
+ * @param {Array.<Object>} arr - The array of objects
+ * @param {string} key - The object key that must be unique
+ * @returns {Array.<Object>} An array of unique objects
+ */
+function uniqueObjects(arr, key) {
+ return arr.filter(function(value, index, self) {
+ return self.map(function(x) {
+ return x[key];
+ }).indexOf(value[key]) == index;
+ });
+}
+```
+
+Here's an example from a fresh developer instance.
+
+```javascript
+var managers = [];
+var gr = new GlideRecord('sys_user');
+gr.addActiveQuery();
+gr.addNotNullQuery('manager');
+gr.query();
+
+while(gr.next()) {
+ var managerId = gr.getValue('manager');
+ var managerUsername = gr.manager.user_name.getValue();
+ managers.push({
+ userId: managerId,
+ username: managerUsername,
+ });
+}
+
+var uniqueManagers = uniqueObjects(managers, 'userId');
+
+gs.print(managers.length);
+gs.print(uniqueManagers.length);
+
+// ---------- OUTPUT ----------
+// *** Script: 13
+// *** Script: 11
+```
diff --git a/content/posts/get-status-code-for-failed-webrequests-in-powershell.md b/content/posts/get-status-code-for-failed-webrequests-in-powershell.md
new file mode 100644
index 0000000..12c805f
--- /dev/null
+++ b/content/posts/get-status-code-for-failed-webrequests-in-powershell.md
@@ -0,0 +1,88 @@
+---
+title: 'Get status code for failed webrequests in Powershell'
+description: "If you are sending web requests with Powershell you will notice that if your request fails, that is if it returns any status code other than 2xx, it will throw an error. Now, how do you get the details of the failed request?"
+tags: ['powershell']
+date: 2021-09-23T21:35:29+02:00
+draft: false
+---
+
+If you are sending web requests with Powershell you will notice that if your request fails, that is if it returns any status code other than 2xx, it will throw an error. Now, how do you get the details of the failed request?
+
+<!--more-->
+
+## StatusCode
+
+In Powershell, when you use `Invoke-WebRequest` or `Invoke-RestMethod`, details about the failed request are available in the `$_.Exception.Response` object. Let's say you want to know whether the error is because of a bad request or an internal server error; you can do this:
+
+```powershell
+try {
+ $response = Invoke-RestMethod -Uri "https://jsonplaceholder.typicode.com/users/11"
+} catch {
+ $StatusCode = $_.Exception.Response.StatusCode
+
+ if ($StatusCode -eq [System.Net.HttpStatusCode]::NotFound) {
+ Write-Error "User was not found!"
+ } elseif ($StatusCode -eq [System.Net.HttpStatusCode]::InternalServerError) {
+ Write-Error "InternalServerError: Something went wrong on the backend!"
+ } else {
+ Write-Error "Expected 200, got $([int]$StatusCode)"
+ }
+}
+```
+
+If you want to compare the status code as an integer you can just cast the status code, which, as you can see from the example above, is a `System.Net.HttpStatusCode` enum.
+
+```powershell
+try {
+ $response = Invoke-RestMethod -Uri "https://jsonplaceholder.typicode.com/users/11"
+} catch {
+ $StatusCode = [int]$_.Exception.Response.StatusCode
+
+ if ($StatusCode -eq 404) {
+ Write-Error "User was not found!"
+ } elseif ($StatusCode -eq 500) {
+ Write-Error "InternalServerError: Something went wrong on the backend!"
+ } else {
+ Write-Error "Expected 200, got $([int]$StatusCode)"
+ }
+}
+```
+
+As always when using namespaces in Powershell, you can define them at the top and it will save you some typing.
+
+```powershell
+using namespace System.Net
+
+try {
+ $response = Invoke-RestMethod -Uri "https://jsonplaceholder.typicode.com/users/11"
+} catch {
+ $StatusCode = $_.Exception.Response.StatusCode
+
+ if ($StatusCode -eq [HttpStatusCode]::NotFound) {
+ Write-Error "User was not found!"
+ } elseif ($StatusCode -eq [HttpStatusCode]::InternalServerError) {
+ Write-Error "InternalServerError: Something went wrong on the backend!"
+ } else {
+ Write-Error "Expected 200, got $([int]$StatusCode)"
+ }
+}
+```
+
+## Response content
+
+Many APIs will give you additional information in the response body when a request fails. This content is not stored in the `$_.Exception.Response` object, but in `$_.ErrorDetails.Message`. To simplify our previous example we can do this.
+
+NOTE: `https://jsonplaceholder.typicode.com` does not return an additional response body when failing, but if your API supports it, this is how you would extract that information.
+
+```powershell
+using namespace System.Net
+
+try {
+ $response = Invoke-RestMethod -Uri "https://jsonplaceholder.typicode.com/users/0"
+} catch {
+ $StatusCode = $_.Exception.Response.StatusCode
+ $ErrorMessage = $_.ErrorDetails.Message
+
+ Write-Error "$([int]$StatusCode) $($StatusCode) - $($ErrorMessage)"
+}
+```
diff --git a/content/posts/get-type-definition-in-powershell.md b/content/posts/get-type-definition-in-powershell.md
new file mode 100644
index 0000000..7348075
--- /dev/null
+++ b/content/posts/get-type-definition-in-powershell.md
@@ -0,0 +1,215 @@
+---
+title: 'Get type definition in Powershell'
+description: "Today I went back to some Powershell scripting with the Az module and it frustrated me that I wasn't easily able to know what properties `Get-AzADGroup` (or any of the other Az cmdlets) returned to me without actually invoking the cmdlet."
+tags: ['powershell']
+date: 2021-08-27T09:43:57+02:00
+draft: false
+---
+
+Today I went back to some Powershell scripting with the Az module and it frustrated me that I wasn't easily able to know what properties `Get-AzADGroup` (or any of the other Az cmdlets) returned to me without actually invoking the cmdlet. E.g. I don't want to invoke `New-AzADGroup` just to see what properties it will give me so that I can use them in my script. Previously I've relied on IntelliSense in my editor, but it often fails, so I set out to find a more manual solution (who would have thought..).
+
+<!--more-->
+
+Some modules provide information in their help about what type a particular cmdlet returns. We can use this to get information about what that type contains.
+
+If we look at the `Get-AzADGroup` cmdlet as an example. When we run `Get-Help Get-AzADGroup -Full` we can see under the "OUTPUT" section that it returns a `Microsoft.Azure.Commands.ActiveDirectory.PSADGroup` type. To inspect that type we can do the following.
+
+```powershell
+PS > Get-Help Get-AzADGroup -Full
+...
+
+OUTPUTS
+ Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+
+...
+PS > [Microsoft.Azure.Commands.ActiveDirectory.PSADGroup]::new() | gm
+
+ TypeName: Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+
+Name MemberType Definition
+---- ---------- ----------
+Equals Method bool Equals(System.Object obj)
+GetHashCode Method int GetHashCode()
+GetType Method type GetType()
+ToString Method string ToString()
+AdditionalProperties Property System.Collections.Generic.IDictionary[string,System.Object] AdditionalProperties {get;set;}
+DeletionTimestamp Property System.Nullable[datetime] DeletionTimestamp {get;set;}
+Description Property string Description {get;set;}
+DisplayName Property string DisplayName {get;set;}
+Id Property string Id {get;set;}
+MailEnabled Property System.Nullable[bool] MailEnabled {get;set;}
+MailNickname Property string MailNickname {get;set;}
+ObjectType Property string ObjectType {get;}
+SecurityEnabled Property System.Nullable[bool] SecurityEnabled {get;set;}
+Type Property string Type {get;set;}
+
+```
+
+Here we can see what methods and properties we will get if we run `Get-AzADGroup`.
+
+Another solution is to call either `GetMembers()`, `GetProperties()` or `GetMethods()` on the type, which will give you detailed information about each member. It's a little verbose to be frank, so I prefer the first method.
+
+```powershell
+PS > [Microsoft.Azure.Commands.ActiveDirectory.PSADGroup].GetProperties()
+
+MemberType : Property
+Name : SecurityEnabled
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876641
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.Nullable`1[System.Boolean]
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.Nullable`1[System.Boolean] get_SecurityEnabled()
+SetMethod : Void set_SecurityEnabled(System.Nullable`1[System.Boolean])
+CustomAttributes : {}
+
+MemberType : Property
+Name : MailEnabled
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876642
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.Nullable`1[System.Boolean]
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.Nullable`1[System.Boolean] get_MailEnabled()
+SetMethod : Void set_MailEnabled(System.Nullable`1[System.Boolean])
+CustomAttributes : {}
+
+MemberType : Property
+Name : MailNickname
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876643
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.String
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.String get_MailNickname()
+SetMethod : Void set_MailNickname(System.String)
+CustomAttributes : {}
+
+MemberType : Property
+Name : ObjectType
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876644
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.String
+Attributes : None
+CanRead : True
+CanWrite : False
+IsSpecialName : False
+GetMethod : System.String get_ObjectType()
+SetMethod :
+CustomAttributes : {}
+
+MemberType : Property
+Name : Description
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876645
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.String
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.String get_Description()
+SetMethod : Void set_Description(System.String)
+CustomAttributes : {}
+
+MemberType : Property
+Name : DisplayName
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADObject
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876650
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.String
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.String get_DisplayName()
+SetMethod : Void set_DisplayName(System.String)
+CustomAttributes : {}
+
+MemberType : Property
+Name : Id
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADObject
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876651
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.String
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.String get_Id()
+SetMethod : Void set_Id(System.String)
+CustomAttributes : {}
+
+MemberType : Property
+Name : Type
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADObject
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876652
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.String
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.String get_Type()
+SetMethod : Void set_Type(System.String)
+CustomAttributes : {}
+
+MemberType : Property
+Name : DeletionTimestamp
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADObject
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876653
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.Nullable`1[System.DateTime]
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.Nullable`1[System.DateTime] get_DeletionTimestamp()
+SetMethod : Void set_DeletionTimestamp(System.Nullable`1[System.DateTime])
+CustomAttributes : {}
+
+MemberType : Property
+Name : AdditionalProperties
+DeclaringType : Microsoft.Azure.Commands.ActiveDirectory.PSADObject
+ReflectedType : Microsoft.Azure.Commands.ActiveDirectory.PSADGroup
+MetadataToken : 385876654
+Module : Microsoft.Azure.PowerShell.Cmdlets.Resources.dll
+IsCollectible : False
+PropertyType : System.Collections.Generic.IDictionary`2[System.String,System.Object]
+Attributes : None
+CanRead : True
+CanWrite : True
+IsSpecialName : False
+GetMethod : System.Collections.Generic.IDictionary`2[System.String,System.Object] get_AdditionalProperties()
+SetMethod : Void set_AdditionalProperties(System.Collections.Generic.IDictionary`2[System.String,System.Object])
+CustomAttributes : {}
+
+```
diff --git a/content/posts/get_changed_fields_in_server_scripts_in_servicenow.md b/content/posts/get_changed_fields_in_server_scripts_in_servicenow.md
new file mode 100644
index 0000000..7242639
--- /dev/null
+++ b/content/posts/get_changed_fields_in_server_scripts_in_servicenow.md
@@ -0,0 +1,36 @@
+---
+title: "Get changed fields in server scripts in ServiceNow"
+description: "In ServiceNow you often write Business Rules or some other logic based on fields that have been changed/updated. For the most part, this can be done via the GUI, but sometimes you have to resort to some scripting."
+date: 2021-08-17T10:04:06+02:00
+tags: ['servicenow', 'javascript']
+draft: false
+---
+
+In ServiceNow you often write Business Rules or some other logic based on fields that have been changed/updated. For the most part, this can be done via the GUI, but sometimes you have to resort to some scripting. If you ever need to get which fields have been changed/updated, e.g. in an advanced filter, this is how you check for it.
+
+<!--more-->
+
+```javascript
+(function(current){
+ var gru = GlideScriptRecordUtil.get(current);
+
+ // Returns an arrayList of changed field elements with friendly names
+ var changedFields = gru.getChangedFields();
+
+ //Returns an arrayList of changed field elements with database names
+ var changedFieldNames = gru.getChangedFieldNames();
+
+ //Returns an arrayList of all change values from changed fields
+ var changes = gru.getChanges();
+
+ // Convert to JavaScript Arrays
+ gs.include('j2js');
+ changedFields = j2js(changedFields);
+ changedFieldNames = j2js(changedFieldNames);
+ changes = j2js(changes);
+
+ gs.info("Changed Fields: " + JSON.stringify(changedFields));
+ gs.info("Changed Field Names: " + JSON.stringify(changedFieldNames));
+ gs.info("Changes: " + JSON.stringify(changes));
+})(current);
+```
diff --git a/content/posts/getting-started-with-azure-functions.md b/content/posts/getting-started-with-azure-functions.md
new file mode 100644
index 0000000..b4e1dbe
--- /dev/null
+++ b/content/posts/getting-started-with-azure-functions.md
@@ -0,0 +1,231 @@
+---
+title: 'Getting Started With Azure Functions'
+description: "Azure Functions is one of Microsoft's serverless services that you can set up in Azure. Being serverless means that you don't have to worry about the infrastructure and environment behind it and you will only pay for the capacity that you actually use when the function is running."
+tags: ['javascript', 'azure']
+date: 2020-02-19T00:00:00+01:00
+draft: false
+---
+
+Azure Functions is one of Microsoft's serverless services that you can set up in Azure. Being serverless means that you don't have to worry about the infrastructure and environment behind it, and you will only pay for the capacity that you actually use when the function is running. Traditionally you would have a server that runs 24/7 and consumes capacity. Very simplified, serverless will spin up and down as requests come in. This means that you only have to focus on the code it should run.
+
+<!--more-->
+
+## Prerequisites
+
+1. An Azure Subscription
+2. Azure CLI ([Install the Azure CLI](https://docs.microsoft.com/nb-no/cli/azure/install-azure-cli?view=azure-cli-latest))
+3. Node.js and npm
+
+You can register for a free Azure subscription here: [https://azure.microsoft.com/en-us/free/](https://azure.microsoft.com/en-us/free/)
+
+## Introduction
+
+In this article I will walk you through the following subjects.
+
+1. Create an Azure Function
+2. Install Azure Functions Core Tools and create a new local function
+3. Publish function to Azure Function
+
+## Create an Azure Function
+
+### Connecting to Azure
+
+The first task is to login to Azure through our CLI.
+
+```bash
+az login
+```
+
+This will try to open your browser and direct you to a login page, if not you will be instructed to go to [https://microsoft.com/devicelogin](https://microsoft.com/devicelogin) and type in a token that is also provided.
+
+```bash
+dotpwsh@demo:~$ az login
+WARNING: To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code <TOKEN> to authenticate.
+```
+
+Now that we're logged in we can start working with Azure.
+
+### Preparation
+
+To setup an Azure Function we're going to need the following resources from Azure.
+
+- Resource Group
+- Storage Account
+- Azure Function
+
+Each of them is going to need, among other things, a name and location. To make it easier for us let's prep some variables.
+
+```bash
+LOCATION=westeurope
+RESOURCE_NAME=af-test-rg
+STORAGE_NAME=$(head /dev/urandom | tr -dc a-z0-9 | head -c 12)aftestsa #E.g sde4npmv81okaftestsa
+FUNCTION_NAME=$(head /dev/urandom | tr -dc A-Za-z0-9 | head -c 12)-test-af #E.g 48aDF9HYfiWY-test-af
+```
+
+> It's a good practice to append a shortcode to the end of Azure resource names that says something about what the resource is, e.g. 'sa' for storage account, 'rg' for resource group, 'af' for Azure Functions and so on.
+
+Every resource in Azure needs to live somewhere, so we have to specify a location. In this example I'm going to use `westeurope` for all the resources because it's the closest for me, but you should choose something close to you.
+
+To list available locations you can run these two commands.
+
+```bash
+az account list-locations -o table
+az functionapp list-consumption-locations -o table
+```
+
+We also create a name variable for each of our resources. The resource group name only needs to be unique within your subscription, but both storage account and function app names must be globally unique. Therefore we prepend a random string of characters. Note that a storage account name can only be 3 to 24 characters long and may only contain lowercase letters and numbers.
+
+How you choose to generate the random string is up to you.
+
+### Create a Resource Group
+
+Briefly, every service in Azure must belong to a resource group. You can have one or more services within a resource group. For the most part you want to use resource groups as a logical collection of services related to a project or deployment. E.g. a website might need an App Service, Storage and some Analytics. Services related to that specific website should reside in the same resource group, so when the day comes that you want to decommission it, you can just delete the whole resource group. It also gives you a better overview of which services are related.
+
+Let's start by creating our resource group.
+
+```bash
+az group create --name $RESOURCE_NAME --location $LOCATION
+```
+
+The `--name` parameter is what we would like to call our resource group. As mentioned this must be unique within your subscription. The `--location` parameter is where we want our resource group to be stored. Since we have already decided on the name and location, we can use the variables we setup earlier.
+
+### Create our Storage Account
+
+Next, we need a storage account where our files (code and modules) will be stored. Azure Functions does not include storage on its own.
+
+```bash
+az storage account create \
+--name $STORAGE_NAME \
+--location $LOCATION \
+--resource-group $RESOURCE_NAME \
+--sku Standard_LRS
+```
+
+The only thing here that we haven't defined beforehand is the type of storage we want. The `--sku` parameter is how we define which type we want. Here we use the simplest one, which is 'Standard_LRS'.
+
+### Create our Function app
+
+Lastly, we're going to create our Azure Function app.
+
+```bash
+az functionapp create \
+--name $FUNCTION_NAME \
+--resource-group $RESOURCE_NAME \
+--consumption-plan-location $LOCATION \
+--storage-account $STORAGE_NAME \
+--runtime node
+```
+
+Here we create the Azure Function app and link it to the resources we created earlier. One important parameter is the `--runtime` parameter, which defines what kind of runtime you want. This can be one of the following: dotnet, java, node, powershell or python.
+
+In this example we want our function to run nodejs (Javascript) code, so we specify `node`. For now, the default Azure Function version is 2, so in our example it will create a nodejs 10 runtime environment for us. You could specify a `--functions-version 3` and then `--runtime-version 12` to get the latest nodejs runtime environment. For simplicity we're going to stick with the defaults.
+
+> You can read more about the available options here: [az functionapp create](https://docs.microsoft.com/en-us/cli/azure/functionapp?view=azure-cli-latest#az-functionapp-create)
+
+When this command has finished, our Function app is up and running, but we haven't deployed any code yet, so let's do that.
+
+### Bonus: Azure Function appsettings
+
+A Function app can have **Application settings**, which are exposed as environment variables in your code. Let's say you have one or more functions (within your Function app) that do something and then send a notification to a Microsoft Teams channel. You can then expose the Teams webhook URI as an environment variable instead of hardcoding it in each function. This makes it very easy to send the notification to another channel.
+
+```bash
+az functionapp config appsettings set \
+--name $FUNCTION_NAME \
+--resource-group $RESOURCE_NAME \
+--settings TEAMS_URL=<YOUR_TEAMS_WEBHOOK_URL>
+```
+
+The command needs the name of the Function app and the resource group that the Function app is in; then you can specify the settings in `<VARIABLE_NAME>=<VALUE>` format. If you want to specify multiple settings you can separate them with spaces. If a setting already exists, its value will be replaced. **NOTE: there is no space between the variable name, the `=` and the value.**
+
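+Inside a JavaScript function these application settings are available on `process.env`. A minimal sketch (the `TEAMS_URL` setting from above; the response is just a placeholder):
+
+```javascript
+module.exports = async function (context, req) {
+    // Application settings are exposed as environment variables at runtime
+    const teamsUrl = process.env.TEAMS_URL;
+
+    context.res = {
+        status: 200,
+        body: `Teams webhook configured: ${Boolean(teamsUrl)}`
+    };
+};
+```
+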
+## Install Azure Functions Core Tools and create a new local function
+
+Now that we have set everything up in Azure, we're ready to start developing our functions, but first we need to install **Azure Functions Core Tools** which will help us to generate a function template and publish it to Azure when we're done.
+
+### Installing Azure Functions Core Tools
+
+Azure Functions Core Tools is a npm package, so we need to install it through npm.
+
+```bash
+npm install -g azure-functions-core-tools
+```
+
+This will install the **azure-functions-core-tools** package globally, which means that we don't have to install it for every project. By default this will install the package for Function app version 2. If your Function app version is 3, you must specify the version, like so:
+
+```bash
+npm install -g azure-functions-core-tools@3
+```
+
+### Create our Function app project
+
+When the azure-functions-core-tools package is done installing, we can generate a new project like this. This will create an _azurefn-test-project_ folder in your current directory with a set of standard files.
+
+```bash
+func init azurefn-test-project
+```
+
+You will be prompted for a runtime. In this example choose `node` and `javascript`. When done, `cd` into your project folder.
+
+```bash
+cd azurefn-test-project
+```
+
+Now our project is setup, but we haven't yet created any functions.
+
+### Create our function inside the project
+
+```bash
+func new --name HelloFromOutside
+```
+
+This command will create our function. You will be prompted for which template you want to choose. In this example choose "Http Trigger", which will give us a template for a function that runs when it's requested over HTTP. If you, for example, want to do something scheduled, you can choose "Timer Trigger".
+
+> You can create multiple functions by invoking `func new` and giving each one a name. Every function you create inside this project will be published to our Function app in Azure.
+
+Our project structure should now look like this.
+
+```plaintext
+dotpwsh@demo:~$ tree .
+.
+├ HelloFromOutside
+│ ├ function.json
+│ └ index.js
+├ host.json
+├ local.settings.json
+└ package.json
+
+1 directory, 5 files
+```
+
+Our function itself resides in the _HelloFromOutside_ folder and the code that runs is in the _index.js_ file.
+
+The function generated from the `func new` command will look for a name parameter either in the queryString or the body of the request and send back "Hello \<name\>". We're not going to build out a fancy function, since that's outside the scope of this article, but you can do all sorts of things here. This will do for now.
+
+### Testing our function
+
+To test your functions locally you can run the `func start` command to spin up a local server, which will give you a URI that you can hit over HTTP (e.g. your browser, curl, Postman or whatever).
+
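+For example (by default Core Tools listens on port 7071):
+
+```bash
+func start
+# in another terminal
+curl "http://localhost:7071/api/HelloFromOutside?name=dotpwsh"
+# Hello dotpwsh
+```
+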
+## Publish function to Azure Function
+
+When you're done writing and testing your function it's time to publish it to your Function app in Azure so that it's available from the outside. This is a one-line command that will do everything for you; just make sure you are logged in with `az login`.
+
+```bash
+func azure functionapp publish $FUNCTION_NAME
+```
+
+This command might take some time to finish. Essentially this command will send your code to Azure, build it, and deploy it to your Function app.
+
+> You can read more about the `func azure functionapp publish` options here [Publish to Azure](https://docs.microsoft.com/en-us/azure/azure-functions/functions-run-local?tabs=windows#publish)
+
+When it's finished publishing you will get, among other things, an "Invoke url" back. Copy this, and verify that your function is working. **NOTE: The code in the URI will differ from function to function**.
+
+```bash
+dotpwsh@demo:~$ curl "https://48aDF9HYfiWY-test-af.azurewebsites.net/api/HelloFromOutside?code=HmPfMj4ImBd7VwrL31IO47OZlLataj2A6LMw6mkQA4sbJSLQiEDtNm==&name=dotpwsh"
+
+Hello dotpwsh
+```
+
+## Wrapping up
+
+In this article we've gone over how to create an Azure Function in Azure through the Azure CLI. We have created a Function app locally, tested it and published it to Azure. All of this is fairly straightforward and can easily be automated. The next step would be to create a repository in Azure DevOps and then automatically build and publish it to your Function app on commit to the master branch or some other trigger. That will be for another article.
+
+Have fun creating your Azure Function apps. The sky is the limit for what you can use this for!
diff --git a/content/posts/getting_started_with_powershell_remoting_on_linux.md b/content/posts/getting_started_with_powershell_remoting_on_linux.md
new file mode 100644
index 0000000..3bf1145
--- /dev/null
+++ b/content/posts/getting_started_with_powershell_remoting_on_linux.md
@@ -0,0 +1,6 @@
+---
+title: "Getting Started With Powershell Remoting on Linux"
+date: 2022-09-06T07:52:46+02:00
+draft: true
+---
+
diff --git a/content/posts/golang-format-date-and-time.md b/content/posts/golang-format-date-and-time.md
new file mode 100644
index 0000000..44a4156
--- /dev/null
+++ b/content/posts/golang-format-date-and-time.md
@@ -0,0 +1,76 @@
+---
+title: 'Golang: Format Date and Time'
+description: "Most programming languages use the same layout (dd-mm-yyyy) to format date and time, but Go decided to go a different route. Below is a little cheat sheet of how to format date and time in Go."
+date: 2021-08-13T00:00:00+01:00
+tags: ['go', 'golang']
+draft: false
+---
+
+Most programming languages use the same layout (dd-mm-yyyy) to format date and time, but Go decided to go a different route. Below is a little cheat sheet of how to format date and time in Go.
+
+<!--more-->
+
+## Examples
+
+### Parsing an existing date
+
+```go
+var (
+ timeToParse = "2021-09-13T07:43:52.823"
+ layout = "2006-01-02T03:04:05.999"
+)
+
+toTime, _ := time.Parse(layout, timeToParse)
+
+fmt.Printf("(%T): %s\n", toTime, toTime)
+
+// output: (time.Time): 2021-09-13 07:43:52.823 +0000 UTC
+```
+
+### Formatting date
+
+```go
+now := time.Now()
+fmt.Println("Default:", now)
+fmt.Println("Formatted:", now.Format("02-01-2006 15:04:05 -0700 MST"))
+
+// output: Default: 2021-08-13 09:01:29.233757 +0200 CEST m=+0.000065018
+// output: Formatted: 13-08-2021 09:01:29 +0200 CEST
+```
+
+## Options
+
+| Type | Options |
+| :------- | :---------------------------- |
+| Year | 06 2006 |
+| Month | 01 1 Jan January |
+| Day | 02 2 \_2 |
+| Weekday | Mon Monday |
+| Hours | 03 3 15 |
+| Minutes | 04 4 |
+| Seconds | 05 5 |
+| ms μs ns | .000 .000000 .000000000 |
+| ms μs ns | .999 .999999 .999999999 |
+| am / pm | PM pm |
+| Timezone | MST |
+| Offset | -0700 -07 -07:00 Z0700 Z07:00 |
+
+## Predefined layouts
+
+| Name | Layout |
+| :---------- | :----------------------------------------------------------- |
+| ANSIC | Mon Jan \_2 15:04:05 2006 |
+| UnixDate | Mon Jan \_2 15:04:05 MST 2006 |
+| RubyDate | Mon Jan 02 15:04:05 -0700 2006 |
+| RFC822 | 02 Jan 06 15:04 MST |
+| RFC822Z | 02 Jan 06 15:04 -0700 // RFC822 with numeric zone |
+| RFC850 | Monday, 02-Jan-06 15:04:05 MST |
+| RFC1123 | Mon, 02 Jan 2006 15:04:05 MST |
+| RFC1123Z | Mon, 02 Jan 2006 15:04:05 -0700 // RFC1123 with numeric zone |
+| RFC3339 | 2006-01-02T15:04:05Z07:00 |
+| RFC3339Nano | 2006-01-02T15:04:05.999999999Z07:00 |
+| Kitchen | 3:04PM |
+| Stamp | Jan \_2 15:04:05 |
+| StampMilli | Jan \_2 15:04:05.000 |
+| StampMicro | Jan \_2 15:04:05.000000 |
+| StampNano | Jan \_2 15:04:05.000000000 |
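+
+These predefined layouts are exported constants in the `time` package, so they can be passed straight to `Format` or `Parse`. For example:
+
+```go
+now := time.Now()
+fmt.Println(now.Format(time.RFC3339))
+
+// output (example): 2021-08-13T09:01:29+02:00
+```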
diff --git a/content/posts/golang-generate-random-numbers.md b/content/posts/golang-generate-random-numbers.md
new file mode 100644
index 0000000..88ab942
--- /dev/null
+++ b/content/posts/golang-generate-random-numbers.md
@@ -0,0 +1,53 @@
+---
+title: 'Golang: Generate Random Numbers'
+description: "Here's how to generate pseudorandom numbers in Go between two values. NOTE: You should always seed your random generator, or else it will produce the same result every time."
+date: 2021-08-13T11:18:04+02:00
+tags: ['go', 'golang']
+draft: false
+---
+
+Here's how to generate pseudorandom numbers in Go between two values. NOTE: You should always seed your random generator, or else it will produce the same result every time. Include this snippet at the top of your main func.
+
+<!--more-->
+
+```golang
+func main() {
+ rand.Seed(time.Now().UnixNano())
+
+ // ...
+}
+```
+
+## Generate a random int between two values
+
+This example returns a random integer between min and max, where min is inclusive and max is exclusive.
+
+```golang
+func getRandomInt(min, max int) int {
+ return rand.Intn(max-min) + min
+}
+
+// example
+for i := 0; i < 5; i += 1 {
+ fmt.Printf("%d ", getRandomInt(5, 10))
+}
+
+// output: 8 9 7 6 9
+```
+
+## Generate a random int between two values (inclusive)
+
+This example returns a random integer between min and max, where both min and max are inclusive.
+
+```golang
+func getRandomIntInclusive(min, max int) int {
+ return rand.Intn((max - min + 1)) + min
+}
+
+// example
+for i := 1; i <= 5; i += 1 {
+ fmt.Printf("%d ", getRandomIntInclusive(5, 10))
+}
+
+// output: 10 5 6 9 9
+```
diff --git a/content/posts/handling-request-and-response-in-servicenow-scripted-rest-api.md b/content/posts/handling-request-and-response-in-servicenow-scripted-rest-api.md
new file mode 100644
index 0000000..52ea26f
--- /dev/null
+++ b/content/posts/handling-request-and-response-in-servicenow-scripted-rest-api.md
@@ -0,0 +1,164 @@
+---
+title: "Handling Request and Response in Servicenow Scripted REST API"
+description: "How to handle request and response in ServiceNow Scripted REST API"
+date: 2021-10-28T16:00:51+02:00
+tags: ['servicenow', 'javascript', 'api', 'scripted-rest-api']
+draft: true
+---
+
+## Request
+
+```javascript
+(function process( /*RESTAPIRequest*/ request, /*RESTAPIResponse*/ response) {
+
+ // Get the request body
+ var requestBody = request.body.data;
+
+ // Get the request headers
+ var requestHeaders = request.headers;
+
+ // Get specific header
+ var acceptHeader = request.getHeader('accept');
+
+ // Get query parameters
+ var requestQueryParams = request.queryParams;
+
+ // Get path parameters
+ var requestPathParams = request.pathParams;
+
+})(request, response);
+```
+
+### Validating JSON properties
+
+Since we often cannot control what the client sends us, validating the JSON request body is very handy. This way we can tell the client that the request they sent us is not what we were expecting, and it's a good first step to eliminate bugs.
+
+```javascript
+// Simple function to validate data against a set of rules in a flat JSON object
+function validateRequest(data, rules) {
+ var dataKeys = Object.keys(data).sort();
+ var rulesKeys = Object.keys(rules).sort();
+
+ if (dataKeys.toString() !== rulesKeys.toString()) {
+ return false;
+ }
+
+ for(var i = 0; i < rulesKeys.length; i += 1) {
+ var key = rulesKeys[i];
+
+ if (typeof data[key] === "undefined" || typeof data[key] !== rules[key]) {
+ return false;
+ }
+ }
+
+ return true;
+}
+
+var rules = {
+ name: "string",
+ value: "string",
+ active: "boolean"
+};
+
+if (!validateRequest(request.body.data, rules)) {
+ var err = new sn_ws_err.ServiceError();
+ err.setStatus(400);
+ err.setMessage('Invalid request body');
+ err.setDetail('Request body is not in expected format');
+
+ response.setError(err);
+ return;
+}
+```
+
+## Response
+
+When creating a REST API it's important how you handle your responses, because that's the only way your client will know what has happened.
+
+When everything goes well, we want to send a response status in the 2XX range; the most common are 200 and 201. You also want to let your clients know which format the response is in. Most common these days is `application/json`.
+
+```javascript
+// 201 Created
+response.setContentType('application/json');
+response.setStatus(201);
+response.setBody({
+ message: "Your object was created",
+ object: {
+ name: "My object",
+ value: "some value",
+ }
+});
+
+// 200 OK
+response.setContentType('application/json');
+response.setStatus(200);
+response.setBody({
+ name: "My object",
+ value: "some value",
+});
+```
+
+There is a lot that can go wrong when handling data sent by clients, and it's good practice to give meaningful error responses so the clients know what went wrong. ServiceNow has some built-in error types, shown below.
+
+```javascript
+// BadRequest error - 400
+response.setError(new sn_ws_err.BadRequestError('Request was malformatted'));
+
+// NotFound error - 404
+response.setError(new sn_ws_err.NotFoundError('Object was not found'));
+
+// NotAcceptable error - 406
+response.setError(new sn_ws_err.NotAcceptableError('Accept header is not a supported type'));
+
+// ConflictError - 409
+response.setError(new sn_ws_err.ConflictError('There was a conflict'));
+
+// UnsupportedMediaType error - 415
+response.setError(new sn_ws_err.UnsupportedMediaTypeError('The requested media type is not supported'));
+```
+
+You can also create custom errors.
+
+```javascript
+// Custom error
+var err = new sn_ws_err.ServiceError();
+err.setStatus(500);
+err.setMessage('Internal Server Error');
+err.setDetail('Something went wrong with the request');
+
+response.setError(err);
+```
+
+By using the response object's error handler, you make sure that errors are reported consistently. They will follow the standard ServiceNow error format, which looks like this.
+
+```json
+{
+ "error": {
+ "detail": "",
+ "message": ""
+ },
+ "status": "failure"
+}
+```
+
+## Example
+
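+A minimal end-to-end sketch combining the request handling, validation and responses shown above (the `name` property is just an example):
+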
+```javascript
+(function process( /*RESTAPIRequest*/ request, /*RESTAPIResponse*/ response) {
+    // Check if correct content type
+    if ((request.getHeader('content-type') || '').indexOf('application/json') === -1) {
+        response.setError(new sn_ws_err.UnsupportedMediaTypeError('Only application/json is supported'));
+        return;
+    }
+
+    // Read request body
+    var body = request.body.data;
+
+    // Do something and handle errors
+    if (!body || !body.name) {
+        response.setError(new sn_ws_err.BadRequestError('Missing required property: name'));
+        return;
+    }
+
+    // Respond
+    response.setContentType('application/json');
+    response.setStatus(200);
+    response.setBody({ message: 'Hello ' + body.name });
+})(request, response);
+```
+
+## Resources
+
+[https://developer.servicenow.com/dev.do#!/reference/api/quebec/server/sn_ws-namespace/c_RESTAPIRequest](https://developer.servicenow.com/dev.do#!/reference/api/quebec/server/sn_ws-namespace/c_RESTAPIRequest)
+[https://developer.servicenow.com/dev.do#!/reference/api/quebec/server/sn_ws-namespace/c_RESTAPIRequestBody](https://developer.servicenow.com/dev.do#!/reference/api/quebec/server/sn_ws-namespace/c_RESTAPIRequestBody)
+[https://developer.servicenow.com/dev.do#!/reference/api/quebec/server/sn_ws-namespace/c_RESTAPIResponse](https://developer.servicenow.com/dev.do#!/reference/api/quebec/server/sn_ws-namespace/c_RESTAPIResponse)
+[https://developer.servicenow.com/dev.do#!/learn/courses/quebec/app_store_learnv2_rest_quebec_rest_integrations/app_store_learnv2_rest_quebec_scripted_rest_apis/app_store_learnv2_rest_quebec_scripted_rest_api_error_objects](https://developer.servicenow.com/dev.do#!/learn/courses/quebec/app_store_learnv2_rest_quebec_rest_integrations/app_store_learnv2_rest_quebec_scripted_rest_apis/app_store_learnv2_rest_quebec_scripted_rest_api_error_objects)
diff --git a/content/posts/improving-powershell-profile.md b/content/posts/improving-powershell-profile.md
new file mode 100644
index 0000000..79aa8f9
--- /dev/null
+++ b/content/posts/improving-powershell-profile.md
@@ -0,0 +1,94 @@
+---
+title: "Improving Powershell Profile"
+description: "For years I've been a fan of the Linux bash shell, with easy support for SSH keys, colorized directory listings and git info in the prompt. But at the same time, I really love Powershell. I have finally found some useful Powershell modules that have made me switch completely to Powershell in the terminal."
+tags: ["powershell"]
+date: 2020-06-18T00:00:00+01:00
+draft: false
+---
+
+For years I've been a fan of the Linux bash shell, with easy support for SSH keys, colorized directory listings and git info in the prompt. But at the same time, I really love Powershell. I have finally found some useful Powershell modules that have made me switch completely to Powershell in the terminal.
+
+<!--more-->
+
+## Colorized Directory Listings
+
+The first module I'm going to introduce is the **Get-ChildItemColor** module by Joon Ro ([github.com/joonro/Get-ChildItemColor](https://github.com/joonro/Get-ChildItemColor)).
+
+This module will override the `Out-Default` cmdlet and give you colorized directory listings when using `Get-ChildItem` or `ls`.
+
+You can easily install it from the [Powershell Gallery](https://www.powershellgallery.com/packages/Get-ChildItemColor).
+
+```powershell
+Install-Module -Name Get-ChildItemColor -Scope CurrentUser -AllowClobber
+```
+
+**NOTE:** The `-AllowClobber` flag is necessary for it to override the `Out-Default` cmdlet.
+
+Now you can just add `Import-Module -Name Get-ChildItemColor` to your Powershell profile.
+
+## Git Information In Your Prompt
+
+The second module we're going to add is the `posh-git` module by Keith Dahlby ([github.com/dahlbyk/posh-git](https://github.com/dahlbyk/posh-git)). This will override your default prompt and add git information when in a folder with git initialized. **NOTE:** This will not override your custom prompt, if you have defined one in your Powershell profile.
+
+This module is also available from the [Powershell Gallery](https://www.powershellgallery.com/packages/posh-git). Currently, v1.0 is in beta, and is necessary if you want support for Powershell Core 6.0 and up. Version v0.x only supports Windows Powershell.
+
+```powershell
+Install-Module -Name posh-git -Scope CurrentUser -AllowPrerelease -Force
+```
+
+To be able to install the v1.0-beta we must include the `-AllowPrerelease` flag.
+
+Next, just add it to your Powershell profile `Import-Module -Name posh-git`.
+
+## Using SSH Keys With Remote Git Repositories
+
+The last module is `posh-sshell`, which is a helper module for your SSH client and used to be a part of the `posh-git` module. It has now been separated into its own module by the same creator, Keith Dahlby ([github.com/dahlbyk/posh-sshell](https://github.com/dahlbyk/posh-sshell)).
+
+As with the others, this is available from the [Powershell Gallery](https://www.powershellgallery.com/packages/posh-sshell).
+
+```powershell
+Install-Module -Name posh-sshell -Scope CurrentUser
+```
+
+There is one cmdlet in particular that we're interested in, which is the `Start-SshAgent` cmdlet. This will start your SSH agent whether you're using the Windows-native OpenSSH client, the OpenSSH client that ships with Git for Windows, or PuTTY's Pageant client.
+
+If you are using the Windows-native OpenSSH client, make sure that the service is not `disabled`.
+
+```powershell
+Get-Service -Name ssh-agent | Select-Object Status, Name, StartType
+```
+
+If `StartType` says `disabled` you can run the following command to enable it or else `Start-SshAgent` will fail.
+
+```powershell
+Get-Service -Name ssh-agent | Set-Service -StartType Manual
+```
+
+Next, add the following to your profile.
+
+```powershell
+Import-Module -Name posh-sshell
+Start-SshAgent -Quiet
+```
+
+When the ssh-agent is started it will look for ssh-keys in your `$env:USERPROFILE\.ssh` folder. If you add SSH keys after the ssh-agent has started you can either restart it with
+
+```powershell
+Stop-SshAgent
+Start-SshAgent
+```
+
+or add it to the ssh-agent with
+
+```powershell
+# Adds $env:USERPROFILE\.ssh\id_rsa to the SSH agent.
+Add-SshKey
+
+# OR
+# Adds $env:USERPROFILE\.ssh\mykey to the SSH agent.
+Add-SshKey ~\.ssh\mykey
+```
+
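+Putting it all together, a minimal Powershell profile using the three modules above could look like this:
+
+```powershell
+Import-Module -Name Get-ChildItemColor
+Import-Module -Name posh-git
+Import-Module -Name posh-sshell
+
+Start-SshAgent -Quiet
+```
+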
+## Conclusion
+
+Now, at least in my opinion, you have a workflow in Powershell that is much more similar to what you would have in Linux. It's really cool to see how far Powershell (and Windows) has come in terms of developer workflow.
diff --git a/content/posts/initiate-config-trick-in-python.md b/content/posts/initiate-config-trick-in-python.md
new file mode 100644
index 0000000..4b5bacb
--- /dev/null
+++ b/content/posts/initiate-config-trick-in-python.md
@@ -0,0 +1,63 @@
+---
+title: 'TIL: Initiate Config class in Python'
+description: 'Today I Learned a neat trick to initiate a Config class from a JSON file in Python'
+date: '2024-11-27T14:36:00+02:00'
+tags: ['python']
+---
+
+I came across [anthonywritescode's](https://github.com/anthonywritescode) Github repository for his [twitch-chat-bot](https://github.com/anthonywritescode/twitch-chat-bot) which is written in Python. I have not written much Python in my life so maybe this is trivial, but I found it to be a neat little trick anyways.
+
+<!--more-->
+
+Consider you have the following JSON config file.
+
+```jsonc
+// file: config.json
+{
+ "username": "claw0ry",
+ "token": "636DDA4D-A395-43C5-A2B1-7A0401DE51AB"
+
+}
+```
+
+In the past I would probably have loaded it like this:
+
+```python
+import json
+
+class Config():
+    username: str
+    token: str
+
+    def __init__(self, username: str, token: str) -> None:
+        self.username = username
+        self.token = token
+
+    def validate_username(self) -> bool:
+        return True if len(self.username) > 0 else False
+
+with open('./config.json') as f:
+ d = json.load(f)
+    cfg = Config(username=d['username'], token=d['token'])
+
+print(cfg.validate_username())
+```
+
+What we are essentially doing is loading `config.json` into a variable as a dictionary and then initiating a new Config class, assigning each class property via arguments. With the little trick below we can make Python do this for us.
+
+```python
+import json
+from typing import NamedTuple
+
+class Config(NamedTuple):
+ username: str
+ token: str
+
+ def validate_username(self) -> bool:
+ return True if len(self.username) > 0 else False
+
+with open('./config.json') as f:
+ cfg = Config(**json.load(f))
+
+print(cfg.validate_username())
+```
+
+The first thing to notice is that our `Config` class now inherits from `NamedTuple`, which generates the constructor for us. Then we have combined `json.load()` with `**` when initiating our Config class. The `json.load()` method gives us a Python dictionary from the `config.json` file, and we then unpack that dictionary (with `**`) into named arguments for the Config class.
+
+It's not much, but a neat little shortcut and in my opinion it reads a little better too.
diff --git a/content/posts/interacting_with_azure_keyvault_in_go.md b/content/posts/interacting_with_azure_keyvault_in_go.md
new file mode 100644
index 0000000..fa3d6a5
--- /dev/null
+++ b/content/posts/interacting_with_azure_keyvault_in_go.md
@@ -0,0 +1,293 @@
+---
+title: 'Interacting with Azure Key Vault in Go'
+description: "Most times when working with APIs there's some kind of documentation on how to interact with them. Working with the Azure SDK for Go is a different story. There's almost no documentation (except the code itself)."
+tags: ["go", "golang", "azure", "keyvault"]
+date: 2021-08-20T23:18:32+02:00
+draft: false
+---
+
+Most times when working with APIs there's some kind of documentation on how to interact with them. Working with the Azure SDK for Go is a different story; there's almost no documentation (except the code itself). At my current job we use Azure a lot, and a big part of that is Azure Key Vault. For my latest project I had to fetch some secrets from Key Vault to use in a CLI application, so I had to start digging into the source code to find out how to interact with it.
+
+<!--more-->
+
+## 1. Authentication
+
+Almost any endpoint in the Azure API requires authentication, so let's start with that. Services in the Azure API, for the most part, use the `autorest/azure/auth` module for handling authentication, but for Key Vault it is a bit different. For Key Vaults we have two modules; one for managing Key Vaults, and one for working with the data.
+
+- Management: github.com/Azure/go-autorest/autorest/azure/auth
+- Data: github.com/Azure/azure-sdk-for-go/services/keyvault/auth
+
+This is very important, because if you type "auth.NewAuthorizerFromCLI" in your editor and have auto imports on, it will most likely use the autorest module, which will give you an error when working with the data inside Key Vault.
+
+Another important concept to know is that the Azure SDK for Go uses something called an "Authorizer". As we'll see in this sample, we need to initiate an "authorizer" from one of the auth modules and then pass that into the module for the specific service, Key Vault in this scenario.
+
+### Methods
+
+When authorizing with the Azure SDK, there are three methods to choose from:
+
+- NewAuthorizerFromCLI
+- NewAuthorizerFromEnvironment
+- NewAuthorizerFromFile
+
+#### NewAuthorizerFromCLI
+
+If you have `az cli` installed, you can authenticate using your current az user. Run `az account show` to see the currently logged in account, or `az login` to log in. This may be the easiest option.
+
+#### NewAuthorizerFromEnvironment
+
+This will allow you to authorize using environment variables. It will look for variables belonging to the different authentication mechanisms in this order:
+
+1. Client credentials
+2. Client certificate
+3. Username password
+4. MSI
+
+It will determine the method to use based on which of these environment variables are set:
+
+- AZURE_SUBSCRIPTION_ID
+- AZURE_TENANT_ID
+- AZURE_CLIENT_ID
+- AZURE_CLIENT_SECRET
+- AZURE_CERTIFICATE_PATH
+- AZURE_CERTIFICATE_PASSWORD
+- AZURE_USERNAME
+- AZURE_PASSWORD
+
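+As a minimal sketch, authenticating with client credentials (a service principal) could look like this; the values are placeholders:
+
+```bash
+export AZURE_TENANT_ID="<tenant-id>"
+export AZURE_CLIENT_ID="<client-id>"
+export AZURE_CLIENT_SECRET="<client-secret>"
+
+# not required for the authentication itself, but often set as well
+export AZURE_SUBSCRIPTION_ID="<subscription-id>"
+```
+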
+#### NewAuthorizerFromFile
+
+This method allows you to place credentials in a JSON file, and export an environment variable `AZURE_AUTH_LOCATION` that tells the Azure SDK where to look for the file. This file can either be created manually, or you can use the output from `az cli` when creating a new service principal. For example:
+
+```bash
+moiaune@box:~$ az ad sp create-for-rbac --sdk-auth > azureauth.json
+moiaune@box:~$ cat azureauth.json
+{
+ "clientId": "b52dd125-9272-4b21-9862-0be667bdf6dc",
+ "clientSecret": "ebc6e170-72b2-4b6f-9de2-99410964d2d0",
+ "subscriptionId": "ffa52f27-be12-4cad-b1ea-c2c241b6cceb",
+ "tenantId": "72f988bf-86f1-41af-91ab-2d7cd011db47",
+ "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
+ "resourceManagerEndpointUrl": "https://management.azure.com/",
+ "activeDirectoryGraphResourceId": "https://graph.windows.net/",
+ "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
+ "galleryEndpointUrl": "https://gallery.azure.com/",
+ "managementEndpointUrl": "https://management.core.windows.net/"
+}
+moiaune@box:~$ export AZURE_AUTH_LOCATION=/home/moiaune/azureauth.json
+```
+
+> NOTE: REMEMBER TO STORE THE FILE IN A SECURE LOCATION
+
+### Example
+
+Now, let's see this in action. I'm going to use the `NewAuthorizerFromCLI` method because it's the simplest. First I need to make sure that I'm logged in to the correct account and subscription. So I run `az login` and a website will pop up in my browser, telling me to log in. When that's done, you can run `az account show` to make sure that you are logged in with the correct user and subscription.
+
+```go
+package main
+
+import (
+    "log"
+
+    "github.com/Azure/azure-sdk-for-go/services/keyvault/auth"
+    "github.com/Azure/azure-sdk-for-go/services/keyvault/v7.1/keyvault"
+)
+
+var (
+ // for simplicity we make our client global
+ client keyvault.BaseClient
+)
+
+func main() {
+ // we initiate our Key Vault client
+ client = keyvault.New()
+
+    // then we initiate our authorizer
+    authorizer, err := auth.NewAuthorizerFromCLI()
+    if err != nil {
+        log.Fatal(err)
+    }
+
+    // and tell our client to authenticate using that
+    client.Authorizer = authorizer
+}
+```
+
+This is literally it. As long as you are successfully logged into `az cli`, you are now ready to work with data in Key Vault using Go.
+
+## 2. Fetching a Key Vault secret
+
+Now that we have learned how to authenticate, let's try to do something useful, like fetching a secret from Azure Key Vault.
+
+In the Azure SDK, if you want to get a specific secret, you must provide it with a version. In most scenarios we want the latest version, so let's first write a function that will list all versions and give us the latest one. We will then write another function that fetches a secret based on that latest version. We will continue to build on the code above.
+
+```go
+// ...
+
+var (
+ // name of our Key Vault
+ vaultName = "example-vault-01"
+
+ // this will build the BaseURI for our Key Vault
+ vaultBaseURI = fmt.Sprintf("https://%s.%s", vaultName, azure.PublicCloud.KeyVaultDNSSuffix)
+)
+
+func getLatestVersion(secretName string) (string, error) {
+ // let's fetch all versions
+ list, err := client.GetSecretVersionsComplete(context.Background(), vaultBaseURI, secretName, nil)
+ if err != nil {
+ return "", err
+ }
+
+ var lastDate time.Time
+ var lastVersion string
+
+ // loop through all versions
+ for list.NotDone() {
+
+ v := list.Value()
+
+ // make sure to only check for secrets that are enabled
+ if *v.Attributes.Enabled {
+ updated := time.Time(*v.Attributes.Updated)
+
+            // if lastDate is not set, or this version is newer than lastDate,
+            // remember both its date and its version hash
+            if lastDate.IsZero() || updated.After(lastDate) {
+                lastDate = updated
+
+                // split the ID on '/' and get the last part which is the version hash
+                parts := strings.Split(*v.ID, "/")
+                lastVersion = parts[len(parts)-1]
+            }
+ }
+
+ list.Next()
+ }
+
+ return lastVersion, nil
+}
+```
+
+Essentially what this code does is get all versions for a specific secret, then loop through them to find the newest one that is also enabled. For that version we split the ID field on '/' and take the last part, which is the version hash.
+
+Now that we have a method to get the newest version hash, we can build our function for fetching the secret itself. We continue to build on our code from above.
+
+```go
+// ...
+
+func getSecret(secretName string) (string, error) {
+ // get latest version for our secret
+ latestVersion, err := getLatestVersion(secretName)
+ if err != nil {
+ return "", err
+ }
+
+ // get secret itself
+ secret, err := client.GetSecret(context.Background(), vaultBaseURI, secretName, latestVersion)
+ if err != nil {
+ return "", err
+ }
+
+ // only return the value a.k.a THE secret
+ return *secret.Value, nil
+}
+```
+
+First we get the latest version using our `getLatestVersion()` function, then we get the secret itself. The `GetSecret()` function returns a `SecretBundle` which contains some metadata and other fields, but in this example we're only interested in `Value`, which is the actual secret.
+
+If we put it all together it will look like this.
+
+```go
+package main
+
+import (
+ "context"
+ "fmt"
+ "strings"
+ "time"
+
+ "github.com/Azure/azure-sdk-for-go/services/keyvault/auth"
+ "github.com/Azure/azure-sdk-for-go/services/keyvault/v7.1/keyvault"
+ "github.com/Azure/go-autorest/autorest/azure"
+)
+
+var (
+ vaultName = "example-vault-01"
+ vaultBaseURI = fmt.Sprintf("https://%s.%s", vaultName, azure.PublicCloud.KeyVaultDNSSuffix)
+)
+
+var (
+ client keyvault.BaseClient
+)
+
+func main() {
+ client = keyvault.New()
+
+    authorizer, err := auth.NewAuthorizerFromCLI()
+    if err != nil {
+        fmt.Println("An error occurred:", err)
+        return
+    }
+
+    client.Authorizer = authorizer
+
+ secretValue, err := getSecret("example-secret")
+ if err != nil {
+        fmt.Println("An error occurred:", err)
+ return
+ }
+
+ fmt.Println("Secret Value:", secretValue)
+}
+
+func getLatestVersion(secretName string) (string, error) {
+ list, err := client.GetSecretVersionsComplete(context.Background(), vaultBaseURI, secretName, nil)
+ if err != nil {
+ return "", err
+ }
+
+ var lastDate time.Time
+ var lastVersion string
+
+ for list.NotDone() {
+
+ v := list.Value()
+
+ if *v.Attributes.Enabled {
+ updated := time.Time(*v.Attributes.Updated)
+
+            if lastDate.IsZero() || updated.After(lastDate) {
+                lastDate = updated
+
+                parts := strings.Split(*v.ID, "/")
+                lastVersion = parts[len(parts)-1]
+            }
+ }
+
+ list.Next()
+ }
+
+ return lastVersion, nil
+}
+
+func getSecret(secretName string) (string, error) {
+ latestVersion, err := getLatestVersion(secretName)
+ if err != nil {
+ return "", err
+ }
+
+ secret, err := client.GetSecret(context.Background(), vaultBaseURI, secretName, latestVersion)
+ if err != nil {
+ return "", err
+ }
+
+ return *secret.Value, nil
+}
+```
+
+This should output:
+
+```bash
+Secret Value: hunter2
+```
+
+## Wrapping up
+
+So now you know how to fetch secrets from Azure Key Vault using Go. If you want to interact with other services in the Azure SDK, the process is pretty much the same.
+
+1. Create a client from the service
+2. Initiate an authorizer
+3. Set the client to use the authorizer
+
+The SDK is pretty well written and easy to understand once you grasp the process, so it's not that difficult to dive into the source code to find answers. Though I still prefer proper documentation.
diff --git a/content/posts/linux_server_hardning.md b/content/posts/linux_server_hardning.md
new file mode 100644
index 0000000..65bb019
--- /dev/null
+++ b/content/posts/linux_server_hardning.md
@@ -0,0 +1,144 @@
+---
+title: 'Debian 12 Server Hardening'
+description: 'Basic hardening of a new Debian 12 server'
+date: '2024-07-10T13:03:56+02:00'
+tags: ['debian', 'security', 'linux']
+toc: true
+url: linux-server-hardning
+---
+
+We take a look at some of the initial configuration you should do when spinning up a new internet-connected Debian 12 server to harden it against common attacks. The list of actions is in no way exhaustive; depending on what you are hosting there are further actions to take.
+
+<!--more-->
+
+## 1. Update and upgrade system packages
+
+Make sure that your installed packages are up-to-date to reduce the risk of having vulnerable packages installed.
+
+```console
+apt update && apt upgrade -y
+```
+
+## 2. Create a dedicated non-root user
+
+When you spin up a server you are provided with a root user that has access to everything. It's good practice to not use this for everyday tasks. Instead you create your own user and give it necessary permissions.
+
+```console
+useradd --create-home --shell /usr/bin/bash --groups sudo <user_name>
+```
+
+## 3. Setup SSH Keys
+
+By default you are logging into SSH with passwords. It's recommended to use SSH keys instead; an attacker would then need to steal your private key instead of just guessing a password. First switch to your newly created user with `su <user_name>`.
+
+```console
+mkdir -p ~/.ssh
+touch ~/.ssh/authorized_keys
+chmod -R 0700 ~/.ssh
+chmod 0600 ~/.ssh/authorized_keys
+```
+
+On your local machine (that you use to connect to the server with) you must generate a ssh keypair.
+
+```console
+ssh-keygen -b 521 -t ecdsa -f ~/.ssh/my_debian
+```
+
+This will create a private and public key in your `~/.ssh` folder with the name you specified.
+
+```console
+claw0ry@lnx:~$ ls ~/.ssh
+my_debian my_debian.pub
+```
+
+Next you must copy the contents of `my_debian.pub` and paste it into the `.ssh/authorized_keys` file on the server.
+
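+One way to do this (while password login is still enabled) is with `ssh-copy-id` from your local machine; `<user_name>` and `<server_ip>` are placeholders for your own user and server:
+
+```console
+ssh-copy-id -i ~/.ssh/my_debian.pub <user_name>@<server_ip>
+```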
+
+## 4. Disable SSH password auth and root login
+
+Now that we have setup our ssh keys, we can go ahead and disable password login. We also want to disable direct login to `root`. If we need root we can login with our dedicated user and then become root, since we added ourselves to the `sudo` group.
+
+```console
+sudo vim /etc/ssh/sshd_config
+```
+
+Find the line where it says `PermitRootLogin` and change this to `no`. Next find `PasswordAuthentication` and also change this to `no`.
+
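+After the change, the two relevant lines in `/etc/ssh/sshd_config` should read:
+
+```
+PermitRootLogin no
+PasswordAuthentication no
+```
+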
+Restart **sshd**.
+
+```console
+sudo systemctl restart sshd
+```
+
+If everything was set up correctly you should be able to log out of the server and log back in without a password (only using your SSH key). You can also test it by moving your SSH keys out of the `~/.ssh` folder and trying to connect to the server. It should tell you that it only accepts key pairs.
+
+## 5. Setup firewall with UFW
+
+**ufw** stands for "uncomplicated firewall" and it's a pretty accurate name.
+
+**NOTE:** Make sure you allow SSH before enabling and starting ufw. Otherwise you might lock yourself out for good!
+
+```console
+sudo apt install ufw
+sudo ufw allow ssh
+sudo ufw enable
+sudo systemctl start ufw
+```
+
+You can see which rules are in play with `sudo ufw status`.
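+
+On a fresh setup where only SSH has been allowed, the output should look roughly like this:
+
+```console
+claw0ry@localhost:~$ sudo ufw status
+Status: active
+
+To                         Action      From
+--                         ------      ----
+22/tcp                     ALLOW       Anywhere
+22/tcp (v6)                ALLOW       Anywhere (v6)
+```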
+
+## 6. Setup fail2ban for SSH auth
+
+If you have ever deployed a server with internet access and looked at the logs, you know that it will get hammered with suspicious login attempts. We can use fail2ban in conjunction with ufw to block such attempts. After x failed login attempts fail2ban will put the IP address on a temporary block list.
+
+```console
+sudo apt install fail2ban
+```
+
+Traditionally SSH authentication logs were stored in `/var/log/auth.log`, but in Debian 12 these are now collected by the systemd journal. By default fail2ban will look for `/var/log/auth.log`, so we need to tell it to use the systemd journal (journalctl) instead.
+
+```console
+touch /etc/fail2ban/paths-debian.local
+echo "[DEFAULT]" > /etc/fail2ban/paths-debian.local
+echo "sshd_backend = systemd" >> /etc/fail2ban/paths-debian.local
+```
+
+After we edited the configuration we must restart fail2ban.
+
+```console
+sudo systemctl restart fail2ban
+```
+
+Here are some other useful fail2ban commands.
+
+```console
+# see the overall status of jail <sshd>
+sudo fail2ban-client status sshd
+
+# get a list of currently banned ip
+sudo fail2ban-client banned
+
+# get a list of currently banned ip's for <sshd> jail
+sudo fail2ban-client get sshd banip
+
+# see ssh auth logs
+sudo journalctl -u ssh
+```
+
+## 7. Get a list of connections and ports on your system
+
+You can see which ports and connections are in use. On a fresh server install with sshd, this is a fairly normal state:
+
+```console
+claw0ry@localhost:~$ sudo ss -tulpn
+Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
+udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:* users:(("chronyd",pid=568,fd=5))
+udp UNCONN 0 0 [::1]:323 [::]:* users:(("chronyd",pid=568,fd=6))
+tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=613,fd=3))
+tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=613,fd=4))
+```
+
+- `-t`: services listening on TCP
+- `-u`: services listening on UDP
+- `-l`: services listening
+- `-p`: listening process
+- `-n`: show port numbers instead of names
diff --git a/content/posts/make-git-work-with-multiple-accounts.md b/content/posts/make-git-work-with-multiple-accounts.md
new file mode 100644
index 0000000..50ad924
--- /dev/null
+++ b/content/posts/make-git-work-with-multiple-accounts.md
@@ -0,0 +1,116 @@
+---
+title: 'Make git work with multiple accounts'
+date: 2022-08-11T13:38:35+02:00
+draft: false
+---
+
+In this article we're going to look at how you can setup git to work with multiple Github accounts and SSH.
+
+<!--more-->
+
+### Generate SSH keys
+
+Github only allows you to use the same SSH key for one account, therefore if you have multiple accounts (e.g. personal and work) you must generate two different SSH key pairs.
+
+```bash
+ssh-keygen -f ~/.ssh/gh_personal -t rsa -b 4096
+ssh-keygen -f ~/.ssh/gh_work -t rsa -b 4096
+```
+
+### Folder structure
+
+My personal preference is to use a dedicated folder per Github account. So my folder structure looks something like this:
+
+```
+~/code
+└── github.com
+ ├── personal
+ └── work
+```
+
+Based on this, we can put a `.gitconfig` in each of the folders, so it becomes this:
+
+```
+~/code
+└── github.com
+ ├── personal
+ │   └── .gitconfig
+ └── work
+ └── .gitconfig
+```
+
+We're going to edit these files soon to add account specific configurations.
+
+### Conditional configuration includes
+
+git version 2.13 and newer supports [conditional includes](https://git-scm.com/docs/git-config#_includes), which means that we can include different `.gitconfigs` based on a condition.
+
+We're going to use the `gitdir` keyword to include a specific `.gitconfig` based on where our git project is located.
+
+In our main `.gitconfig` (usually under `~/.gitconfig`) we need to add these conditionals at the top.
+
+```
+[includeIf "gitdir:~/code/github.com/personal/"]
+ path = ~/code/github.com/personal/.gitconfig
+[includeIf "gitdir:~/code/github.com/work/"]
+ path = ~/code/github.com/work/.gitconfig
+
+...
+```
+
+**NOTE:** The trailing slash in the folder path is necessary or else it won't work.
+
+### Tell git which ssh-key to use
+
+Now that we have set our conditionals we can edit each of the `.gitconfig` files to tell git which ssh-key to use, and set other account specific configs like name and email.
+
+**~/code/github.com/personal/.gitconfig**
+
+```
+[user]
+ name = "John Doe"
+ email = "johndoe@example.com"
+[core]
+    sshCommand = "ssh -i ~/.ssh/gh_personal"
+```
+
+**~/code/github.com/work/.gitconfig**
+
+```
+[user]
+ name = "John Doe"
+ email = "johndoe@company.com"
+[core]
+    sshCommand = "ssh -i ~/.ssh/gh_work"
+```
+
+Now whenever you interact with git in a project that is located under `~/code/github.com/work` it will use the `.gitconfig` for work and associated ssh key, and vice-versa for your personal projects.
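+
+A quick way to verify that the conditional include is picked up is to check which value git resolves inside a repository in each folder (`some-repo` is just a placeholder for any repository you have cloned there):
+
+```bash
+cd ~/code/github.com/work/some-repo
+git config user.email    # should print johndoe@company.com
+
+cd ~/code/github.com/personal/some-repo
+git config user.email    # should print johndoe@example.com
+```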
+
+### Additional providers
+
+This setup is not specific to Github, so if you for example also have a Bitbucket (or any other git provider that supports ssh) account which uses a different SSH key you would use the same technique.
+
+```
+ssh-keygen -f ~/.ssh/bitbucket -t rsa -b 4096
+```
+
+```
+# file: ~/.gitconfig
+
+...
+
+[includeIf "gitdir:~/code/bitbucket/"]
+ path = ~/code/bitbucket/.gitconfig
+
+...
+```
+
+```
+# file: ~/code/bitbucket/.gitconfig
+
+[user]
+ name = "John Doe"
+ email = "johndoe@company.com"
+[core]
+ sshCommand = "ssh -i ~/.ssh/bitbucket"
+```
diff --git a/content/posts/monitor-azure-keyvault-for-expiring-secrets-and-certificates.md b/content/posts/monitor-azure-keyvault-for-expiring-secrets-and-certificates.md
new file mode 100644
index 0000000..fa5b700
--- /dev/null
+++ b/content/posts/monitor-azure-keyvault-for-expiring-secrets-and-certificates.md
@@ -0,0 +1,13 @@
+---
+title: "Monitor Azure Keyvault for Expiring Secrets and Certificates"
+date: 2023-06-28T14:52:47+02:00
+draft: true
+---
+
+Golang has in recent years become a popular language for creating services and tooling, partly because of its ease of use, but also its speed and easy deployment.
+
+In this article we're going to take a look at how we can create a little service to monitor Azure Key Vaults for expiring secrets and certificates.
+
+NOTE: Check out Azure Key Vault Events
+
+https://portal.azure.com/#@ncopmgmt.onmicrosoft.com/resource/subscriptions/b5bff884-cef0-423c-8287-ea5f8d8bf0ac/resourceGroups/ncopmgmt-kv-rg-mpoint/providers/Microsoft.KeyVault/vaults/ncopmgmt-kv-mpoint/events
diff --git a/content/posts/powershell-extract-windows-spotlight-images.md b/content/posts/powershell-extract-windows-spotlight-images.md
new file mode 100644
index 0000000..8b40e6f
--- /dev/null
+++ b/content/posts/powershell-extract-windows-spotlight-images.md
@@ -0,0 +1,73 @@
+---
+title: "Powershell Extract Windows Spotlight Images"
+description: "A very nice feature of Windows 10 is Windows Spotlight, which serves beautiful wallpapers on your lock screen every day. It’s a shame these beautiful images are hidden in a system folder somewhere in Windows, so today I’m going to show you how you can extract these images with Powershell."
+tags: ["powershell", "windows"]
+date: 2018-12-12T00:00:00+01:00
+draft: false
+---
+
+A very nice feature of Windows 10 is Windows Spotlight, which serves beautiful wallpapers on your lock screen every day. It’s a shame these beautiful images are hidden in a system folder somewhere in Windows, so today I’m going to show you how you can extract these images with Powershell. You could of course do this manually, but since these images change periodically (I haven’t found any info on when) it’s much easier to just run a script. Personally I run this script as a scheduled job every day.
+
+<!--more-->
+
+So here’s the script:
+
+```powershell
+<# Filename: Get-WindowsSpotlightImages.ps1 #>
+
+Param (
+ [String] $OutputFolder = "$env:USERPROFILE\Pictures\Spotlight"
+)
+
+Add-Type -AssemblyName System.Drawing
+
+If(-not (Test-Path $OutputFolder)) {
+ New-Item $OutputFolder -ItemType Directory | Out-Null
+}
+
+Get-ChildItem "$env:USERPROFILE\AppData\Local\Packages\Microsoft.Windows.ContentDeliveryManager_cw5n1h2txyewy\LocalState\Assets" | ForEach-Object {
+
+ $NewFilename = $_.Name + ".jpeg"
+ If(-not (Test-Path (Join-Path $OutputFolder $NewFilename))) {
+ Copy-Item $_.Fullname (Join-Path $OutputFolder $NewFilename)
+ }
+
+}
+
+$ImagesToRemove = @()
+$AllImages = Get-ChildItem (Join-Path $OutputFolder "*.jpeg")
+$AllImages | ForEach-Object {
+ $image = New-Object System.Drawing.Bitmap $_.FullName
+ If(-not ($image.Width -ge 1920)) {
+ $ImagesToRemove += $_.FullName
+ } else {
+ }
+ $image.Dispose()
+}
+
+$ImagesToRemove | Remove-Item
+```
+
+Now lets break it down.
+
+**Line 3-5:** We set the default output folder to a Spotlight folder in the user’s Pictures folder.
+
+**Line 7:** We must import the assembly System.Drawing to be able to read the image size.
+
+**Line 9-11:** We create the output folder if it does not exist.
+
+**Line 13-20:** So here you can see the path where Windows Spotlight images are stored. It contains images for both desktop screens and mobile screens. They are not stored with a file extension, so we give them a _.jpeg_ extension so that we can handle them later. Then we copy them to the output folder with the new extension.
+
+**Line 22-31:** We loop through all images in the output folder matching the extension _.jpeg_. Then we look for images that are meant for desktop screens (width of at least 1920px) since we don't want the mobile ones.
+
+> NOTE: If you would like to keep images meant for both desktop and mobile, you could just remove everything from line 21 and down, as well as line 7.
+
+The filenames of images that do not meet our requirement of `width=1920px` are added to the array `$ImagesToRemove`. It’s important that we dispose of the image object when we’re done reading from it, or else you will get a lot of errors.
+
+**Line 33:** This loops through our `$ImagesToRemove` array and deletes the images it contains from the output folder.
+
+## What now
+
+Since Windows Spotlight updates its images periodically, it would make sense to set this script up as a scheduled job, or you could run it manually whenever you feel like it.
+
+Also, this script could probably be improved. I’m no Powershell master. If you have some improvements, please comment below. I’m eager to learn!
diff --git a/content/posts/rss-feed-urls.md b/content/posts/rss-feed-urls.md
new file mode 100644
index 0000000..f76479c
--- /dev/null
+++ b/content/posts/rss-feed-urls.md
@@ -0,0 +1,38 @@
+---
+title: 'Rss Feed Urls'
+description: ''
+date: '2024-10-24T14:49:24+02:00'
+tags: ['rss', 'internet']
+draft: true
+---
+
+I feel that RSS is on the rise again, and that's a good thing! Before social media became dominant, RSS feeds were the best way to keep up-to-date with articles, news, etc. from various sites that you found interesting. Today, the RSS feed has been replaced by your social media feed.
+
+<!--more-->
+
+The problem with that is most of the social media feeds are controlled by the company behind them. They decide (based on non-disclosed algorithms) what is shown to you. Traditionally RSS feed readers would give you the content in chronological order based on when they were published.
+
+I want to get back on the RSS wagon again, but it's not always easy to find out if a website supports RSS. So I have started to collect a list of well-known sites and how to get to their RSS feed. Some of these will be specific to Norway.
+
+## Norwegian sites
+
+- VG (https://www.vg.no/rss/feed)
+- NRK (https://www.nrk.no/rss/)
+- TV2 (https://www.tv2.no/rss/)
+- Nettavisen: Alle saker (https://www.nettavisen.no/service/rich-rss)
+- Nettavisen: Nyheter (https://www.nettavisen.no/service/rich-rss?tag=nyheter)
+- Nettavisen: Sport (https://www.nettavisen.no/service/rich-rss?tag=sport)
+- Politiloggen (https://api.politiet.no/politiloggen/v1/rss)
+ - See [https://api.politiet.no/politiloggen/index.html](https://api.politiet.no/politiloggen/index.html) for more options
+- Kode24 (https://rss.kode24.no/)
+
+## International
+
+- Youtube Channel (https://www.youtube.com/feeds/videos.xml?channel_id=CHANNEL_ID)
+- Wordpress sites (https://<domain>/feed)
+- Hacker News (https://news.ycombinator.com/rss)
+- Reddit (https://www.reddit.com/r/SUBREDDIT_NAME.rss)
+- Wired (https://www.wired.com/feed/rss)
+- Vice (https://www.vice.com/feed/)
+- Phoronix (https://www.phoronix.com/rss.php)
+- Medium (https://medium.com/feed/@HANDLE)
diff --git a/content/posts/servicenow-http-client.md b/content/posts/servicenow-http-client.md
new file mode 100644
index 0000000..acfb3ae
--- /dev/null
+++ b/content/posts/servicenow-http-client.md
@@ -0,0 +1,180 @@
+---
+title: 'ServiceNow HTTP client'
+description: 'A look at the built-in sn_ws.RestMessageV2 HTTP client in ServiceNow'
+date: '2024-11-22T00:00:00+02:00'
+tags: ['servicenow', 'http', 'javascript']
+draft: true
+---
+
+For the past few months I've been building a REST API integration between ServiceNow and another ticketing system, so I thought I'd write about some of my experiences.
+
+## The HTTP client
+
+The first thing to know is that the HTTP client in ServiceNow does not have a particularly intuitive name. It's called `sn_ws.RESTMessageV2`. At first glance it seems to be a class for invoking Outbound REST Messages (System Web Services->Outbound->REST Message), but we can also use it "raw" without setting up a REST Message in the first place.
+
+A simple GET request looks like this. It will fetch your external IP and print the HTTP status code along with the response body.
+
+```javascript
+(function() {
+ var client = new sn_ws.RESTMessageV2();
+ client.setEndpoint("http://ip.claw0ry.net");
+ client.setHttpMethod("get");
+
+ var result = client.execute();
+ gs.info(result.getStatusCode());
+ gs.info(result.getBody());
+})();
+```
+
+After running this as a background script you should get the following output (though with a different IP of course):
+
+```plaintext
+*** Script: 200
+*** Script: 148.139.0.8
+```
+
+## Sending POST/PATCH requests
+
+To send a POST/PATCH request we change the `setHttpMethod` argument to the method we want to use. These HTTP methods are usually expected to send a request body along with them. We can specify the content type of the request body with `setRequestHeader("Content-Type", "<our type>")` and then specify the raw text with `setRequestBody`. This means that if you have a Javascript object, you need to call `JSON.stringify` on it before passing it into the `setRequestBody` function.
+
+```javascript
+(function() {
+ var reqBody = {
+ title: "Hello, World",
+ message: "Yo, my dude!"
+ };
+
+ var client = new sn_ws.RESTMessageV2();
+ client.setEndpoint("https://www.postb.in/1732268930451-4059148549567");
+ client.setHttpMethod("post");
+
+    // we tell the webserver what type our content is expected to be in
+ client.setRequestHeader("Content-Type", "application/json");
+
+    // setRequestBody can't magically convert types based on content-type,
+ // so we must stringify our js object first
+ client.setRequestBody(JSON.stringify(reqBody));
+
+ var result = client.execute();
+ gs.info(result.getStatusCode());
+ gs.info(result.getBody());
+})();
+```
+
+## Parsing response body
+
+Just as we had to stringify our JSON request body in the last example, we need to `JSON.parse` the response (if we expect a JSON result back). If we are curious about what the response content type is, we can look at the response headers with `getHeader("Content-Type")`.
+
+```javascript
+(function() {
+ var client = new sn_ws.RESTMessageV2();
+ client.setEndpoint("https://jsonplaceholder.typicode.com/todos/1");
+ client.setHttpMethod("get");
+
+ var result = client.execute();
+
+ // check if there are any internal ServiceNow errors
+ if (result.haveError()) {
+ gs.error(result.getErrorCode());
+ gs.error(result.getErrorMessage());
+ return;
+ }
+
+    // if the response body is not in JSON format, JSON.parse will fail,
+    // therefore it's good practice to check it first. Since the header may
+    // also contain a character encoding, we split at ';'
+ var content_type = result.getHeader("Content-Type");
+ if (content_type.split(';')[0] !== "application/json") {
+ gs.error("Unexpected response body from API");
+ return;
+ }
+
+ // we now know the response body should be valid JSON so we can parse it
+ var todos = JSON.parse(result.getBody());
+ gs.info(todos.title);
+})();
+```
+
+The result should be:
+
+```plaintext
+*** Script: delectus aut autem
+```
+
+## Error Handling
+
+There are two levels of errors to check for: internal errors reported by `haveError()`, and HTTP error status codes returned by the remote API. A request can execute without an internal error and still come back with a 4xx or 5xx status, so we should check both.
+
+```javascript
+(function() {
+ var client = new sn_ws.RESTMessageV2();
+ client.setEndpoint("https://<instance>.service-now.com/stats.do");
+ client.setHttpMethod("get");
+
+ var result = client.execute();
+
+ // check if there are any internal ServiceNow errors
+ if (result.haveError()) {
+ gs.info(result.getErrorCode());
+ gs.info(result.getErrorMessage());
+ return;
+ }
+
+    // http status codes in the range 200-299 are considered a success;
+    // anything higher is considered an error
+ if (result.getStatusCode() > 299) {
+ var errMessage = [];
+ errMessage.push("ERROR:");
+ errMessage.push(result.getStatusCode() + " - ");
+ errMessage.push(result.getStatusText() + ":");
+ errMessage.push(result.getBody());
+ gs.info(errMessage.join(" "));
+ }
+})();
+```
+
+## Authentication
+
+Often when dealing with external REST APIs we need to authenticate. There are several authentication mechanisms, but most REST APIs implement either Basic Authentication or tokens.
+
+### Basic Authentication
+
+I would say most APIs today implement some kind of token, but there are some that still allow you to use Basic Authentication (like ServiceNow, for example).
+
+Using Basic Authentication in raw form means that you have to set an HTTP `Authorization` header with a base64 encoded string of `<username>:<password>`.
+
+```plain
+// base64encoded("username:password")
+Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
+```
+
+Thankfully the `sn_ws.RESTMessageV2` class has a built-in function that does this for you.
+
+```javascript
+(function() {
+ var client = new sn_ws.RESTMessageV2();
+ client.setEndpoint("https://<instance>.service-now.com/stats.do");
+ client.setHttpMethod("get");
+
+ // this will automatically convert our username and password into a valid
+ // HTTP Basic Authentication format and attach it to our request
+ client.setBasicAuth("<username>", "<password>");
+
+ var result = client.execute();
+ gs.info(result.getBody());
+})();
+```
+
+### Tokens
+
+There are several ways to obtain tokens depending on the system, but common for them all is that once you have obtained a token you must pass it in an HTTP `Authorization` header when making further calls.
+
+```javascript
+(function() {
+ var client = new sn_ws.RESTMessageV2();
+ client.setEndpoint("https://<instance>.service-now.com/stats.do");
+ client.setHttpMethod("get");
+
+    client.setRequestHeader("Authorization", "Bearer <token>");
+
+ var result = client.execute();
+ gs.info(result.getBody());
+})();
+```
diff --git a/content/posts/servicenow-sending-notifications-to-microsoft-teams.md b/content/posts/servicenow-sending-notifications-to-microsoft-teams.md
new file mode 100644
index 0000000..f5d2a3b
--- /dev/null
+++ b/content/posts/servicenow-sending-notifications-to-microsoft-teams.md
@@ -0,0 +1,172 @@
+---
+title: "ServiceNow: Sending notifications to Microsoft Teams"
+description: "Microsoft Teams supports different Connections and one of the simplest is 'Incoming Webhook', which gives you a URL that you can POST to with a correctly configured JSON body and the result will be displayed in your specified channel. In this guide we’ll set this up and POST to it from Service-Now when an incident is created or updated."
+tags: ["servicenow", "javascript"]
+date: 2018-11-29T00:00:00+01:00
+draft: false
+---
+
+Microsoft Teams supports different Connections and one of the simplest is "Incoming Webhook", which gives you a URL that you can POST to with a correctly configured JSON body and the result will be displayed in your specified channel. In this guide we’ll set this up and POST to it from Service-Now when an incident is created or updated.
+
+<!--more-->
+
+## Setting up Connections in Microsoft Teams
+
+To set up an "Incoming Webhook" for a specific channel, do the following:
+
+1. Place your cursor over the channel name and click the three dots to the right.
+2. Choose "Connections".
+3. Find "Incoming Webhook" in the list of connectors and click "Configure".
+4. Give the connection a name and picture (optional) and click "Create".
+
+Take note of the URL in the greyed out box starting with "https://outlook.office.com/webhook/..". We will use this in ServiceNow later.
+
+## Creating an "Outbound REST Message" in ServiceNow
+
+To be able to send an outbound REST message in Service-Now you must first create and configure one. In your application sidebar go to **System Web Services → Outbound → REST Message** and create a new REST Message. Fill out the required information.
+
+**Name** – We will refer to this in our Business Rule later
+**Endpoint** – Paste in the URL from your "Incoming Webhook" in Microsoft Teams
+
+By default Service-Now will only create a GET method for our REST Message. Edit your newly created REST Message and scroll down to "HTTP Methods" and create a new one for POST. Fill out the required information.
+
+**Name** – Just set the name to be ‘POST’ (without quotes)
+**HTTP Method** – This must be set to POST, which is the method Microsoft Teams expects us to use
+**Endpoint** – Paste in the same URL as before
+
+Now our Outbound REST Message is ready and we can create a Business Rule to trigger it.
+
+## Trigger messages with Business Rules in ServiceNow
+
+Now that we have the basics setup, we can create a Business Rule in Service-Now to trigger messages based on our criteria. In this guide we will setup a business rule to trigger when an incident is priority 1 and assigned to the ‘Monitoring’ group. The webhook we setup earlier goes to an ‘Alert’ channel in their team. Browse to **System Definition → Business Rules → New** to create a new business rule.
+
+First we must name our business rule. If you are going to have a lot of teams and channels I would recommend using a naming scheme, e.g. `Teams <team>_<channel> <tag>`. Just remember that the maximum length is 40 characters. So for our business rule it will be "Teams monit_alert pri1".
+
+Then we must define which table to run the business rule against. Since we are targeting incidents, we choose "Incident [incident]".
+
+### When to run
+
+Next, we’re going to setup our filter for when to run under the tab “When to run”. For this demo its going to be a fairly simple filter, but you could make all sorts of advanced filters for when to trigger. Lets have a look at what the different properties does.
+
+#### When
+
+All business rules run on database operations (insert, update, delete or query). Here we can set when we want the business rule to trigger: before, after or async with the database operation. In our demo we will use ‘after’ just so we are sure the data is correctly written to the database before we send it outside Service-Now. If we used ‘before’, some other business rule might manipulate the data after our script but before it is written to the database, and so the data we have access to might not be correct after all.
+
+#### Order
+
+If you have multiple business rules with the same filtering, you can decide in which order you want them to execute. Lowest executes first and highest last. We will keep the default (100).
+
+#### Filter Conditions
+
+This is where most of the power lies. Here we can set our column ⇒ value filters. In our demo we want to set these conditions:
+
+**Active** is true
+**Assignment group** is ‘Monitoring’
+**Assigned to** is empty
+**Priority** is ‘1 – Critical’
+
+And we want it to execute on both database **inserts** and **updates**. The reason we require Assigned to to be empty is that otherwise the rule would execute every time the incident is updated, even after it has been assigned.
+
+### Time for some code
+
+At the top of your form, to the right, you will see a checkbox for Advanced. Make sure it’s checked to see the **Advanced** tab. This is where the magic happens. Inside the **Advanced** tab you have a script field where we can enter some code to execute. Here’s the code and I will break it down for you.
+
+```javascript
+(function executeRule(current, previous) {
+
+ if (current == null) {
+ gs.log("OUTBOUND REST - Teams monit_alert pri1 - Current is not defined");
+ return;
+ }
+
+ var requestBody;
+ var responseBody;
+ var status;
+    var r, resp;
+ var desc = current.description.toString().replace(/(?:\r\n|\r|\n)/g, ' ');
+ var shortened_desc = (desc.length > 140) ? desc.substring(0, 140) + "[...]" : desc;
+ var link_appl = encodeURI("https://<instance>.service-now.com/nav_to.do?uri=%2F" + current.sys_class_name + ".do%3Fsys_id%3D" + current.sys_id);
+
+ var body = {
+ "@type": "MessageCard",
+ "@context": "http://schema.org/extensions",
+ "themeColor": "0076D7",
+ "summary": "New incident has been opened",
+ "sections": [{
+ "activityTitle": "New Incident has been opened",
+ "activitySubtitle": "Monitoring",
+ "activityImage": "https://<instance>.service-now.com/<your_logo>.png",
+ "facts": [{
+ "name": "Case ID",
+ "value": current.number.toString()
+ },
+ {
+ "name": "Title",
+ "value": current.short_description.toString()
+ },
+ {
+ "name": "Company",
+ "value": current.company.name.toString()
+ },
+ {
+ "name": "Description",
+ "value": shortened_desc
+ }],
+ "markdown": true
+ }],
+ "potentialAction": [{
+ "@type": "OpenUri",
+ "name": "View in Service-Now",
+ "targets": [
+ {
+ "os": "default",
+ "uri": link_appl
+ }
+ ]
+ }]
+ };
+
+ try {
+ r = new sn_ws.RESTMessageV2("Teams monit_alert pri1", "post");
+ r.setRequestBody(JSON.stringify(body));
+
+ resp = r.execute();
+ responseBody = resp.haveError() ? resp.getErrorMessage() : resp.getBody();
+ status = resp.getStatusCode();
+ } catch(ex) {
+ responseBody = ex.getMessage();
+ status = '500';
+ } finally {
+ requestBody = r ? r.getRequestBody() : null;
+ }
+
+ gs.log("OUTBOUND REST - Teams monit_alert pri1 - Request Body: " + requestBody);
+ gs.log("OUTBOUND REST - Teams monit_alert pri1 - Response: " + responseBody);
+ gs.log("OUTBOUND REST - Teams monit_alert pri1 - HTTP Status: " + status);
+
+})(current, previous);
+```
+
+**Line 3-6:** This just makes sure that we actually have a `current` object. This is where the incident details are stored and we can make changes to. If it’s not set we’re aborting the whole thing.
+
+**Line 12-13:** Here we are removing any newlines from the description and only includes the first 140 characters. Note that these changes are not written to database, just in the REST message.
+
+**Line 14:** This will generate a URL so that we can link to the incident in Service-Now. Replace `<instance>` with your instance name.
+
+**Line 16-53:** This is a JSON object that defines the look and content of our Microsoft Teams message. This is Microsoft Teams specific and you can read more about it [here](https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/cards/cards) and see some examples and test your JSON object [here](https://messagecardplayground.azurewebsites.net/). Remember to replace `<instance>` with your instance name and `<your_logo>` with your logo path in the _activityImage_ property.
+
+One important thing to notice is that we need to append `.toString()` when referring to attributes in the `current` object, because they return objects and will not render our JSON correctly when we convert it to text later. You could also assign the `current` attributes to variables and then refer to the variables in the JSON, but that's just unnecessary code in my opinion.
+
+**Line 55-67:** This is where we actually send the message to Microsoft Teams, and we have encapsulated it in a try/catch statement. On line 56 we instantiate a new REST message object and refer to the REST Message we created earlier. Use the name as the first parameter and then which HTTP method to use.
+
+Next we convert our JSON to a string and pass it to the request body on line 57. Then we execute the request on line 59 and then store our response on line 60 and 61. Response code ‘200’ indicates that the request was successful.
+
+**Line 69-71:** Here we just print to the script log so we can debug if it's not working. Your script log is located under **System Logs → System Log → Script Log Statement**.
+
+### Finalize it
+
+Once our filter is set and we have inserted our script we just need to save or update the business rule. Make sure that **Active** is checked.
+
+## Testing
+
+Now that we have everything set up, we can start testing it. Let's create a new incident with a caller, priority 1, assigned to the ‘Monitoring’ group, no assignee, and a dummy short description and description. If everything was set up correctly you should immediately see the card in your channel of choice in Microsoft Teams with the information you provided when creating the incident.
diff --git a/content/posts/setting_up_puppet_lab_with_virtual_box.md b/content/posts/setting_up_puppet_lab_with_virtual_box.md
new file mode 100644
index 0000000..d8d2102
--- /dev/null
+++ b/content/posts/setting_up_puppet_lab_with_virtual_box.md
@@ -0,0 +1,291 @@
+---
+title: 'Setting up puppet lab with virtual box'
+date: 2022-07-08T08:48:10+02:00
+draft: true
+---
+
+In this post we'll set up a nice little lab for getting started with Puppet. My choice of hypervisor is VirtualBox, but you can also use VMWare or Hyper-V.
+
+<!--more-->
+
+## Setting up our network in VirtualBox
+
+For this lab we're going to use a NAT Network, which in VirtualBox means that the virtual machines can talk to each other, the host AND the internet.
+
+1. Open VirtualBox and **Preferences**
+2. Go to **Network** tab
+3. Click the **+** icon to add a new NAT Network
+4. Double click on the created NAT Network
+5. Change **Network Name** to "PuppetLab"
+6. Change **Network CIDR** to `10.10.10.0/24`
+7. Click **OK**
+
+Or you can issue these commands.
+
+```bash
+# add new natnetwork
+VBoxManage natnetwork add --netname PuppetLab --network "10.10.10.0/24" --dhcp on --enable
+
+# to verify that our natnetwork was created
+VBoxManage list natnetworks
+
+# if you need to remove a natnetwork configuration
+VBoxManage natnetwork remove --netname <name>
+```
+
+## Setting up a base image
+
+We are going to start with setting up a base image/machine that has the core tools needed. This way we can clone the base machine when we need a new one, instead of going through the whole installation process from scratch every time.
+
+### Create a new virtual machine
+
+1. Create a new virtual machine in Virtual Box
+2. Type will be Linux/Ubuntu(64-bit)
+3. Give it a dynamically allocated harddrive of 20 GB
+4. Download and mount ubuntu 20.04 ISO
+5. Select the "PuppetLab" NAT Network in the **Network** tab
+6. Start the machine
+7. Run through the installer with defaults, but make sure to check for "Install Open SSH server"
+
+### Operating system setup
+
+When the installer has finished and rebooted we'll login and start configuring our base.
+
+#### Update all packages
+
+Make sure our packages are up-to-date.
+
+```bash
+sudo apt update && sudo apt upgrade -y
+```
+
+#### Add the puppet platform on apt
+
+Enable the Puppet platform on Apt.
+
+Source: [Installing Puppet](https://puppet.com/docs/puppet/7/install_puppet.html#enable_the_puppet_platform_apt)
+
+```bash
+wget https://apt.puppet.com/puppet7-release-focal.deb
+sudo dpkg -i puppet7-release-focal.deb
+sudo apt update
+```
+
+#### Install NTP
+
+Install NTP for time syncing.
+
+```bash
+sudo apt install ntp
+```
+
+#### Add puppet master IP to hosts file
+
+Instead of manually adding the Puppet master's IP address to `/etc/hosts` each time, we just add it to our base since every Puppet agent will need it. We will setup the Puppet master later with this IP address.
+
+```bash
+sudo su
+echo '10.10.10.101 puppet' >> /etc/hosts
+```
+
+## Setting up the Puppet master
+
+Start off by cloning our base machine.
+
+1. Right click on the base machine in VirtualBox
+2. Choose **Clone...**
+3. In the dialog change the following settings:
+ 1. **Name:** puppet-master
+    2. **MAC Address Policy:** Generate new MAC addresses for all network adapters
+4. Click **Continue** and then **Clone**
+5. Start your puppet-master machine
+
+### Set a static IP address
+
+Our Puppet master needs a static IP address so that it is always reachable at the same address. Remember that we added `10.10.10.101` to the `/etc/hosts` file in our base image, so the Puppet master must use the static IP `10.10.10.101`.
+
+First we need to find our gateway and network card name.
+
+```bash
+$ ip r s
+default via 10.10.10.1 dev enp0s3 proto dhcp src 10.10.10.4 metric 100
+10.10.10.0/24 dev enp0s3 proto kernel scope link src 10.10.10.4
+10.10.10.1 dev enp0s3 proto dhcp scope link src 10.10.10.4 metric 100
+```
+
+`default via 10.10.10.1` means that the traffic goes via `10.10.10.1` which again means that this is our gateway. `enp0s3` is our network card device name. This will be different from hypervisor to hypervisor.
+
+Ubuntu 20.04 uses Netplan as the default network management tool, so we need to edit the `.yaml` file under `/etc/netplan`. On my machine it's `/etc/netplan/00-installer-config.yaml`. On your machine it might be something else, usually one of:
+
+- `/etc/netplan/00-installer-config.yaml`
+- `/etc/netplan/50-cloud-init.yaml`
+- `/etc/netplan/01-netcfg.yaml`
+
+Open up the file in `vi` or `nano` and edit the following:
+
+```yaml
+network:
+ version: 2
+ ethernets:
+ enp0s3: # Replace with the name of your network card
+ dhcp4: false
+ addresses:
+ - 10.10.10.101/24
+ gateway4: 10.10.10.1
+ nameservers:
+ addresses: [8.8.8.8, 1.1.1.1]
+```
+
+Then run `sudo netplan apply` to apply the changes and `ip addr show dev enp0s3` to show that the new IP address has been set.
+
+### Set hostname
+
+Our Puppet master needs a new hostname.
+
+```bash
+sudo hostnamectl set-hostname puppet
+```
+
+And then reboot the machine.
+
+```bash
+sudo reboot
+```
+
+### Install the puppetserver
+
+Since we already added the puppet platform to Apt in our base machine, we can just go ahead and install it through apt.
+
+```bash
+# install puppetserver
+sudo apt install puppetserver
+
+# reload bash to update $PATH
+bash -l
+
+# verify that we see the puppetserver binary
+puppetserver -v
+```
+
+### Lower the Java Heap size for the Puppet Server service
+
+Since we are experimenting with this on a low end virtual machine, we must lower the Java Heap size so that it doesn't allocate as much memory. The default is 2GB of RAM, but our VM only has 1GB.
+
+Open up `/etc/default/puppetserver` and change the following:
+
+```
+# Modify this
+JAVA_ARGS="-Xms2g -Xmx2g"
+
+# To look like this
+JAVA_ARGS="-Xms512m -Xmx512m"
+```
+
+This will change the puppetserver to only allocate 512MB.
+
+Now reboot the machine again.
+
+### Enable the puppetserver service
+
+NOTE: this step must be executed after changing the Java heap size, or else the puppetserver service will fail to start because of too little RAM.
+
+```bash
+sudo systemctl enable --now puppetserver
+```
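+
+To verify that the service actually started (the first start can take a little while), check its status:
+
+```bash
+sudo systemctl status puppetserver
+```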
+
+## Setting up a Puppet agent
+
+As with the Puppet master we'll clone from our base machine. When done, power up the machine.
+
+### Set a static IP address
+
+Go ahead and do the same as with puppet master, but use the IP address `10.10.10.111`.
+
+### Set hostname
+
+```bash
+sudo hostnamectl set-hostname agent01
+sudo reboot
+```
+
+### Verify that we can contact Puppet master
+
+To verify that our puppet agent server can contact and communicate with our puppet master server, we can simply ping it. Remember we set `puppet` to resolve to `10.10.10.101` in our `/etc/hosts` file. Use `ctrl+c` to cancel ping.
+
+```bash
+$ ping puppet
+PING puppet (10.10.10.101) 56(84) bytes of data.
+64 bytes from puppet (10.10.10.101): icmp_seq=1 ttl=64 time=0.604 ms
+64 bytes from puppet (10.10.10.101): icmp_seq=2 ttl=64 time=0.437 ms
+64 bytes from puppet (10.10.10.101): icmp_seq=3 ttl=64 time=0.383 ms
+^C
+--- puppet ping statistics ---
+3 packets transmitted, 3 received, 0% packet loss, time 2004ms
+rtt min/avg/max/mdev = 0.383/0.474/0.604/0.094 ms
+```
+
+### Setting up puppet-agent
+
+Since we already added the Puppet platform to apt we can go ahead and install the puppet-agent.
+
+```bash
+sudo apt install puppet-agent
+
+# reload bash to update $PATH
+bash -l
+
+# verify that we see the puppet binary
+which puppet
+```
+
+## Exercise 01
+
+Now, as an exercise, try to add another agent machine (**agent02**) by cloning the base image, then install and configure the Puppet agent on it the same way.
+
+## Setting up CA
+
+On both the agents run `sudo /opt/puppetlabs/bin/puppet agent -t`. You should see something like this.
+
+```bash
+Info: Creating a new RSA SSL key for agent01
+Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
+Info: Creating a new SSL certificate request for agent01
+Info: Certificate Request fingerprint (SHA256): 27.24:61:E0:2E:D1:14:D5:9C:B0:B2:D1:83:B6:36:E9:CC:18:5D:AB:FF:3B:CB:E7:C7:7B:F0:7E:44:D4:CF:D8
+Info: Certificate for agent01 has not been signed yet
+Couldn't fetch certificate from CA server; you might still need to sign this agent's certificate (agent01).
+Exiting now because the waitforcert setting is set to 0.
+```
+
+Now on the puppet master server run the following, as root or with sudo, to list all certificate requests.
+
+```bash
+$ sudo /opt/puppetlabs/bin/puppetserver ca list
+agent01 (SHA256) <fingerprint>
+agent02 (SHA256) <fingerprint>
+```
+
+To authorize the certificate for `agent01` we can run this command from the puppet master.
+
+```bash
+$ sudo /opt/puppetlabs/bin/puppetserver ca sign --certname agent01
+Successfully signed certificate request for agent01
+```
+
+Then go back to **agent01** and run `sudo /opt/puppetlabs/bin/puppet agent -t` again and you should see something likes this.
+
+```bash
+Info: csr_attributes file loading from /etc/puppetlabs/puppet/csr_attributes.yaml
+Info: Creating a new SSL certificate request for agent01
+Info: Certificate Request fingerprint (SHA256): 27.24:61:E0:2E:D1:14:D5:9C:B0:B2:D1:83:B6:36:E9:CC:18:5D:AB:FF:3B:CB:E7:C7:7B:F0:7E:44:D4:CF:D8
+Info: Downloaded certificate for agent01 from https://puppet:8140/puppet-ca/v1
+Info: Using environment 'production'
+Info: Retrieving pluginfacts
+Info: Retrieving plugin
+Info: Caching catalog for agent01
+Info: Applying configuration version '1657279166'
+Notice: Applied catalog in 0.01 seconds
+```
+
+> NOTE: If you get the `Notice: Run of Puppet configuration client already in progress; [...]` just simply try again shortly.
+
+Now do the same for **agent02**.
diff --git a/content/posts/simple-url-shortner-with-powershell-and-azure-functions.md b/content/posts/simple-url-shortner-with-powershell-and-azure-functions.md
new file mode 100644
index 0000000..b2e9642
--- /dev/null
+++ b/content/posts/simple-url-shortner-with-powershell-and-azure-functions.md
@@ -0,0 +1,66 @@
+---
+title: 'Simple Url Shortner With Powershell and Azure Functions'
+date: 2022-02-07T15:03:12+01:00
+draft: true
+---
+
+In this article we're going to set up a simple URL shortener written in Powershell, hosted with Azure Functions and Azure Table Storage for persistence.
+
+<!--more-->
+
+## 1. Creating our resources in Azure
+
+### 1.1 Connect to Azure
+
+```powershell
+Connect-AzAccount
+```
+
+### 1.1 Resource Group
+
+```powershell
+$resourceGroup = New-AzResourceGroup -Name "simple-url-shortner" -Location "westeurope"
+```
+
+### 1.2 Azure Storage Account
+
+```powershell
+$storageAccount = New-AzStorageAccount -ResourceGroupName $resourceGroup.ResourceGroupName `
+ -Name "simpleurlshorner001" `
+ -SkuName "Standard_LRS" `
+ -Location "westeurope"
+```
+
+### 1.3 Azure Functions
+
+```powershell
+$funcApp = New-AzFunctionApp -Name "simpleurlshortner001" `
+ -ResourceGroupName $resourceGroup.ResourceGroupName `
+ -StorageAccount $storageAccount.StorageAccountName `
+ -Runtime "Powershell" `
+ -FunctionsVersion 3 `
+ -Location "westeurope"
+
+```
+
+## 2. Building our function
+
+```bash
+func init simpleurlshortner --powershell
+cd simpleurlshortner
+func new --name URLHandler --template "HTTP Trigger" --authlevel "anonymous"
+```
+
+### 2.1 Create short url
+
+### 2.2 Lookup short url
+
+## 3. Testing
+
+```bash
+func start
+```
+
+## Resources
+
+- [https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-powershell?tabs=azure-powershell%2Cbrowser](https://docs.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-powershell?tabs=azure-powershell%2Cbrowser)
diff --git a/content/posts/using_go_vanity_url_with_cgit.md b/content/posts/using_go_vanity_url_with_cgit.md
new file mode 100644
index 0000000..6eca271
--- /dev/null
+++ b/content/posts/using_go_vanity_url_with_cgit.md
@@ -0,0 +1,46 @@
+---
+title: 'Using Go vanity URL with cgit'
+description: 'How to setup Go vanity URLs with cgit to allow custom domain packages'
+date: '2024-10-25T16:00:00+02:00'
+tags: ['debian', 'cgit', 'nginx', 'git', 'go']
+---
+
+This year I wanted to experiment with moving away from Github (and other cloud based VCS). So I setup my own server with [git](https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server), [cgit](https://git.zx2c4.com/cgit/) as the interface and [nginx](https://nginx.org/) as the web server. The basic setup was fairly easy.
+
+<!--more-->
+
+I recently published one of my tools (snowy) and when I tried to install it from my repository I got an error message:
+
+```
+unrecognized import path "git.claw0ry.net/snowy" (parse https://git.claw0ry.net/snowy?go-get=1: no go-import meta tags ())
+```
+
+I went to the internet and did some research. Apparently, Go treats github.com (and other popular VCS hosts) specially, so if we want our custom domain and cgit-hosted repository to work with `go install` and `go get` we need to do some extra work on our end. You can read the official documentation about the subject here: [https://pkg.go.dev/cmd/go#hdr-Remote_import_paths](https://pkg.go.dev/cmd/go#hdr-Remote_import_paths)
+
+Basically what we need is an HTML meta-tag to tell go how to map a package name to a repository. For instance, go does not know that my package name `git.claw0ry.net/snowy` should resolve to `https://git.claw0ry.net/snowy` by itself.
+
+This is the tag that go will look for.
+
+```
+<meta name="go-import" content="git.claw0ry.net/snowy git https://git.claw0ry.net/snowy">
+```
+
+To inject this meta-tag, cgit has a `repo.extra-head-content` option. The downside is that it must be configured per repository. Instead we can use the NGINX `sub_filter` module to inject the meta-tag on every repository.
+
+So in the NGINX configuration for our domain we will replace a part of the HTML before returning it.
+
+```conf
+server {
+ server_name git.claw0ry.net;
+ # ...
+
+ location / {
+ # ...
+
+ sub_filter '</head>' '<meta name="go-import" content="$host$uri git https://$host$uri"></head>';
+ sub_filter_once on;
+ }
+}
+```
+
+The next time you do a `go install` it should work.
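+
+For example, installing my tool from the repository above should now succeed (assuming the module path is `git.claw0ry.net/snowy`):
+
+```
+go install git.claw0ry.net/snowy@latest
+```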
diff --git a/content/posts/web_requests_with_basic_authentication_in_powershell.md b/content/posts/web_requests_with_basic_authentication_in_powershell.md
new file mode 100644
index 0000000..3fec075
--- /dev/null
+++ b/content/posts/web_requests_with_basic_authentication_in_powershell.md
@@ -0,0 +1,77 @@
+---
+title: 'Web requests with basic authentication in Powershell'
+date: 2022-07-05T00:00:00
+draft: false
+---
+
+HTTP Basic Authentication is one of many authentication schemes supported by the HTTP protocol, and is a very common option when authenticating to a web service. The basic authentication scheme is very simple and consists of generating a base64 token from your username and password separated by a colon (`:`) and putting the token in an `Authorization` HTTP header. Let's explore some examples in Powershell.
+
+<!--more-->
+
+## Manually creating the token
+
+Let's start with an example from scratch.
+
+```powershell {linenos=inline}
+# We define our username and password. Ideally this should come from environment variables
+# or some secret store
+$username = "user1"
+$password = "pa55w0rd!"
+
+# Join them into a single string, separated by a colon (:)
+$pair = "{0}:{1}" -f ($username, $password)
+
+# Turn the string into a base64 encoded string
+$bytes = [System.Text.Encoding]::ASCII.GetBytes($pair)
+$token = [System.Convert]::ToBase64String($bytes)
+
+# Define a basic 'Authorization' header with the token
+$headers = @{
+ Authorization = "Basic {0}" -f ($token)
+}
+
+# Send a web request using our authorization header
+$response = Invoke-RestMethod -Uri "https://example.com/api" -Headers $headers
+```
+
+As you can see from the example above, we take our username and password and combine them into a single string separated by a colon (`:`). Then we take that string and turn it into a Base64 encoded string. This is the token that we need to pass in the `Authorization` header. Our token will look like this:
+
+```plaintext
+dXNlcjE6cGE1NXcwcmQh
+```
+
+Line 14-16 is where we create a custom header object to send with our request. Here we define the `Authorization` header, tell it to use the `Basic` scheme and then provide our token.
+
+On the last line we send our request with the custom header.
+
+## The powershell way
+
+Since Basic Authentication is so common, Powershell has of course implemented a simpler solution.
+
+```powershell {linenos=inline}
+# Again, these should come from env vars, Key Vault or some other secret store
+$username = "user1"
+$password = "pa55w0rd!"
+
+# Since our password is plaintext we must convert it to a secure string
+# before creating a PSCredential object
+$securePassword = ConvertTo-SecureString -String $password -AsPlainText -Force
+$credential = [PSCredential]::new($username, $securePassword)
+
+# Tell Invoke-RestMethod to use Basic Authentication scheme with our credentials
+$response = Invoke-RestMethod -Uri "https://example.com/api" -Authentication Basic -Credential $credential
+```
+
+This works by telling the `Invoke-RestMethod` cmdlet which authentication scheme we want to use and providing a `PSCredential` object, and it will do the rest for us.
+
+This way is much simpler because we don't need to worry about generating the token, and in many situations we already have a `PSCredential` object.
+
+## Conclusion
+
+Knowing how to use basic authentication with Powershell can be very handy since most systems support this authentication scheme. As we saw in this article Powershell has made it really simple to use.
+
+## Resources
+
+- [RFC 7617](https://datatracker.ietf.org/doc/html/rfc7617)
+- [Wikipedia: Basic access authentication](https://en.wikipedia.org/wiki/Basic_access_authentication)
+- [Reference: Invoke-RestMethod](https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/invoke-restmethod?view=powershell-7.2)
diff --git a/content/posts/working_with_comments_and_work_notes_in_servicenow.md b/content/posts/working_with_comments_and_work_notes_in_servicenow.md
new file mode 100644
index 0000000..a13d469
--- /dev/null
+++ b/content/posts/working_with_comments_and_work_notes_in_servicenow.md
@@ -0,0 +1,30 @@
+---
+title: "Working with comments and work notes in ServiceNow"
+description: "Additional comments and Work notes are of type Journal List and therefore we cannot get their value directly. So here's how to interact with comments and work_notes."
+tags: ['servicenow', 'javascript']
+date: 2021-08-17T10:13:04+02:00
+draft: false
+---
+
+Additional comments and Work notes are of type `Journal List` and therefore we cannot get their value directly. So here's how to interact with `comments` and `work_notes`.
+
+<!--more-->
+
+```javascript
+(function(current) {
+
+ // Get the latest entry
+ var lastComment = current.comments.getJournalEntry(1);
+ var lastWorkNote = current.work_notes.getJournalEntry(1);
+
+ // Get all entries
+ var allComments = current.comments.getJournalEntry(-1);
+ var allWorkNotes = current.work_notes.getJournalEntry(-1);
+
+})(current);
+```
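+
+Writing to these fields works the other way around: you assign a string to the element and the text is added as a new journal entry when the record is updated. A minimal sketch (for example in a background script where `current` is a loaded record):
+
+```javascript
+(function(current) {
+
+    // Add a new customer visible comment and a new work note
+    current.comments = "Update sent to the customer";
+    current.work_notes = "Checked the logs, nothing suspicious found";
+
+    // The journal entries are written when the record is updated
+    current.update();
+
+})(current);
+```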
+
+## Official documentation
+
+- [getJournalEntry(Number mostRecent) | GlideElement | ServiceNow Developers](https://developer.servicenow.com/dev.do#!/reference/api/quebec/server/no-namespace/c_GlideElementScopedAPI#SGE-getJournalEntry_N)
+- [GlideElement | ServiceNow Developers](https://developer.servicenow.com/dev.do#!/reference/api/quebec/server/no-namespace/c_GlideElementScopedAPI)