Inter-process communication between Javascript and Python

Often, different programming languages are good at different things. To this end, it is sometimes desirable to write different parts of a program in different languages. In no situation is this more apparent than with an application implementing some kind of (ethical, I hope) AI-based feature.

I'm currently kinda-sorta-maybe thinking of implementing a lightweight web interface backed by an AI model (more details if the idea comes to fruition), and while I like writing web servers in Javascript (it really shines with asynchronous input/output), AI models generally don't like being run in Javascript very much - as I have mentioned before, Tensorflow.js has a number of bugs that mean it isn't practically useful for doing anything serious with AI.

Naturally, the solution then is to run the AI stuff in Python (yeah, Python sucks - believe me I know) since it has the libraries for it, and get Javascript/Node.js to talk to the Python subprocess via inter-process communication, or IPC.

While Node.js has a fanceh message-passing system it calls IPC, this doesn't really work when communicating with processes that don't also run Javascript/Node.js. To this end, the solution is to use the standard input (stdin) and standard output (stdout) of the child process to communicate:

A colourful diagram of the IPC setup implemented in this post. Node.js, Python, and Terminal are 3 different coloured boxes. Python talks to Node.js via stdin and stdout as input and output respectively. Python's stderr interacts directly with the terminal, as do Node.js' stdin, stdout, and stderr.

(Above: A diagram of how the IPC setup we're going for works.)

This of course turned out to be more nuanced and complicated than I expected, so I thought I'd document it here - especially since the Internet was very unhelpful on the matter.

Let's start by writing the parent Node.js script. First, we need to spawn that Python subprocess, so let's do that:

import { spawn } from 'child_process';
const python = spawn("path/to/child.py", {
    stdio: [ "pipe", "pipe", "inherit" ]
});

...where we set stdin and stdout to pipe mode - which lets us interact with the streams - and the standard error (stderr) to inherit mode, which allows it to share the parent process' stderr. That way, errors in the child process propagate upwards and end up in the same log file that the parent process sends its output to. Note that to spawn the Python script directly like this, it needs a shebang line (#!/usr/bin/env python3) and the executable bit set (chmod +x path/to/child.py).

If you need to send the Python subprocess some data to start with, you have to wait until it has spawned before sending anything:

python.on(`spawn`, () => {
    console.log(`[node:data:out] Sending initial data packet`);
    python.stdin.write(`start\n`);
});

...an easier alternative to message passing for small amounts of data would be to set an environment variable when you call child_process.spawn - i.e. env: { key: "value" } in the options object above.
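
On the Python side, reading such a value is just a dictionary lookup - here's a minimal sketch, assuming a variable named TEST as in the full code listing below:

import os

# Read a value passed in via child_process.spawn's env option.
# "TEST" matches the example variable in the full code listing below.
initial_data = os.environ.get("TEST", "some sensible default")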

Next, we need to read the response from the Python script:

import nexline from 'nexline'; // Put this import at the top of the file

const reader = nexline({
    input: python.stdout,
})

for await(const line of reader) {
    console.log(`[node:data:in] ${line}`)
}

The simplest way to do this would be to listen for the data event on python.stdout, but this does not guarantee that each chunk that arrives is actually a line of data, since data between processes is not line-buffered like it is when displaying content in the terminal.

To fix this, I suggest using one of my favourite npm packages: nexline. Believe it or not, handling this issue efficiently with minimal buffering is a lot more difficult than it sounds, so it's just easier to pull in a package to do it for you.

With a nice little for await..of loop, we can efficiently read the responses from the Python child process.

If you were doing this for real, I would suggest wrapping this in an EventEmitter (Node.js) / EventTarget (WHATWG browser spec, also available in Node.js).

Python child process

That's basically it for the parent process, but what does the Python child script look like? It's really quite easy actually:

import sys

sys.stderr.write(f"[python] hai\n")
sys.stderr.flush()

count = 0
for line in sys.stdin:
    sys.stdout.write(f"boop" + str(count) + "\n")
    sys.stdout.flush()
    count += 1

Easy! We can simply iterate over sys.stdin to read, line by line, from the parent Node.js process.

We can write to sys.stdout to send data back to the parent process, but it's important to call sys.stdout.flush()! Node.js doesn't need an equivalent because it flushes writes to stdout automatically, but in Python the response may not actually be sent until who-knows-when (if at all) unless you call .flush() to force it. (Alternatively, print(value, flush=True) writes and flushes in one go.) Think of it as batching graphics draw calls to increase efficiency - except in this case it doesn't work in our favour.

Conclusion

This is just a quick little tutorial on how to implement Javascript/Node.js <--> Python IPC. We deal in plain-text messages here, but I would recommend using JSON - JSON.stringify() / JSON.parse() (Javascript) | json.dumps() / json.loads() (Python) - to serialise / deserialise messages for robustness. Serialised JSON contains no raw newline characters (any newlines inside strings are escaped to \n), so it is safe to use with the line-based transport described here.
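
For example, here's a minimal sketch of the Python side of such a JSON-per-line protocol - the message fields are made up purely for illustration:

import sys
import json

for line in sys.stdin:
    message = json.loads(line)  # one JSON message per line
    response = { "action": "boop", "count": message.get("count", 0) + 1 }
    sys.stdout.write(json.dumps(response) + "\n")  # dumps emits no raw newlines
    sys.stdout.flush()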

See also JSON Lines, a related specification.

Until next time!

Code

index.mjs:

#!/usr/bin/env node
"use strict";

import { spawn } from 'child_process';
import nexline from 'nexline';

///
// Spawn subprocess
///
const python = spawn("/tmp/x/child.py", {
    env: {  // NOTE: this REPLACES the parent's environment variables - use { ...process.env, TEST: "value" } to inherit them too
        "TEST": "value"
    },
    stdio: [ "pipe", "pipe", "inherit" ]
});

python.on(`spawn`, () => {
    console.log(`[node:data:out] start`);
    python.stdin.write(`start\n`);
});

///
// Send stuff on loop - example
///
let count = 0;
setInterval(() => {
    python.stdin.write(`interval ${count}\n`);
    console.log(`[node:data:out] interval ${count}`);
    count++;
}, 1000);


///
// Read responses
///
const reader = nexline({
    input: python.stdout,
})

for await(const line of reader) {
    console.log(`[node:data:in] ${line}`)
}

child.py:

#!/usr/bin/env python3
import sys

sys.stderr.write(f"[python] hai\n")
sys.stderr.flush()

count = 0
for line in sys.stdin:
    # sys.stderr.write(f"[python:data:in] {line}\n")
    # sys.stderr.flush()

    sys.stdout.write(f"boop" + str(count) + "\n")
    sys.stdout.flush()
    count += 1

Encrypting and formatting a disk with LUKS + Btrfs

Hey there, a wild tutorial appeared! This is just a quick one for self-reference, but I hope it helps others too.

The problem at hand is that of formatting a data disk (if you want to format your root / disk please look elsewhere - it usually has to be done before or during installation unless you like fiddling around in a live environment) with Btrfs.... but also encrypting the disk, which isn't something that Btrfs natively supports.

I'm copying over some data to my new lab PC, and I've decided to up the security on the data disk I store my research data on.

Unfortunately, both GParted and KDE Partition Manager were unable to help me (the former not supporting LUKS, and the latter crashing with a strange error), so I ended up looking through more posts than should be reasonable to find a solution that didn't involve encrypting / or /boot.

It's actually quite simple. First, find your disk's name via lsblk, and ensure you have created the partition in question. You can format it with anything (e.g. one of the partitioning tools mentioned above), since we'll be overwriting it anyway.

Note: You may need to reboot after creating the partition (or after some of the steps below) if you encounter errors, as Linux sometimes doesn't much like new partitions appearing out of the blue with names that were previously used during the same boot.

Then, format it with LUKS, the most common encryption scheme on Linux:

sudo cryptsetup luksFormat /dev/nvmeXnYpZ

...then, formatting with Btrfs is a 2-step process. First we hafta unlock the LUKS encrypted partition:

sudo cryptsetup luksOpen /dev/nvmeXnYpZ SOME_MAPPER_NAME

...this creates a virtual 'mapper' block device that we can use like any other normal (physical) partition. Change SOME_MAPPER_NAME to anything you like, so long as it doesn't match anything else in lsblk / df -h and doesn't contain spaces. Avoid unicode/special characters too, just to be safe.

Then, format it with Btrfs:

sudo mkfs.btrfs --metadata single --data single --label "SOME_LABEL" /dev/mapper/SOME_MAPPER_NAME

...replacing SOME_MAPPER_NAME (same value you chose earlier) and SOME_LABEL as appropriate. If you have multiple disks, rinse and repeat the above steps for them, and then bung them on the end:

sudo mkfs.btrfs --metadata raid1 --data raid1 --label "SOME_LABEL" /dev/mapper/MAPPER_NAME_A /dev/mapper/MAPPER_NAME_B ... /dev/mapper/MAPPER_NAME_N

Note the change from single to raid1. raid1 stores at least 2 copies on different disks - it's a bit of a misnomer as I've talked about before.

Now that you have a kewl Btrfs-formatted partition, mount it as normal:

sudo mount /dev/mapper/SOME_MAPPER_NAME /absolute/path/to/mount/point

For Btrfs filesystems with multiple disks, it shouldn't matter which source partition you pick here as Btrfs should pick up on the other disks.

Automation

Now that we have it formatted, we don't want to hafta keep typing all those commands again. The simple solution to this is to create a shell script and put it somewhere in our $PATH.

To do this, we should ensure we have a robust name for the disk instead of /dev/nvme, which could point to a different disk in future if your motherboard or kernel decides to present them in a different order for a giggle. That's easy by looking over the output of blkid and cross-referencing it with lsblk and/or df -h:

sudo lsblk
sudo df -h
sudo blkid # → UUID

The value you're after should be in the UUID="" field. The shell script I came up with is short and sweet:

#!/usr/bin/env bash
disk_id="ID_FROM_BLKID";
mapper_name="SOME_NAME";
mount_path="/absolute/path/to/mount/dir";

sudo cryptsetup luksOpen "/dev/disk/by-uuid/${disk_id}" "${mapper_name}";
sudo mount "/dev/mapper/${mapper_name}" "${mount_path}"

Fill in the values as appropriate:

  • disk_id: The UUID of the disk in question from blkid.
  • mapper_name: A name of your choosing that doesn't clash with anything else in /dev/mapper on your system.
  • mount_path: The absolute path to the directory that you want to mount into - usually in /mnt or /media.

Put this script in e.g. $HOME/.local/bin or somewhere else in $PATH that suits you and your setup. Don't forget to run chmod +x path/to/script!

Conclusion

We've formatted an existing partition with LUKS and Btrfs, and written a quick-and-dirty shell script to semi-automate the process of mounting it here.

If this has been useful or if you have any suggestions, please do leave a comment below!

Defining AI: Sequenced models || LSTMs and Transformers

Hi again! I'm back for more, and I hope you are too! First we looked at word embeddings, and then image embeddings. Another common way of framing tasks for AI models is handling sequenced data, which I'll talk about briefly today.

Banner showing the text 'Defining AI' on top of translucent white vertical stripes against a voronoi diagram in white on a pink/purple background. 3 progressively larger circles are present on the right-hand side.

Sequenced data can mean a lot of things. Most commonly that's natural language, which we can split up into words (tokenisation) and then bung through some word embeddings before dropping it through some model.

That model is usually specially designed for processing sequenced data, such as an LSTM (or its little cousin the GRU, see that same link). An LSTM is part of a class of models called Recurrent Neural Networks, or RNNs. These models work by feeding their output back into themselves along with the next element of input, thereby processing a sequence of items. The output that feeds back in acts as the model's memory, carrying information across multiple items in the sequence. The model can decide to add or remove things from this memory at each iteration, which is how it learns.
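
To make the recurrence concrete, here's a minimal sketch of a single step of a vanilla RNN in Python/numpy - an LSTM adds gating machinery on top of this same basic pattern, and all the names and sizes here are purely illustrative:

import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # Mix the current input with the previous hidden state (the 'memory')
    # to produce the new hidden state.
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

# Toy usage: a sequence of 10 items, each an 8-dimensional vector,
# with a 16-dimensional memory. Note the strictly serial loop!
rng = np.random.default_rng(42)
W_x, W_h, b = rng.normal(size=(8, 16)), rng.normal(size=(16, 16)), np.zeros(16)
h = np.zeros(16)
for x_t in rng.normal(size=(10, 8)):
    h = rnn_step(x_t, h, W_x, W_h, b)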

The problem with this class of model is twofold: firstly, it processes everything serially which means we can't compute it in parallel, and secondly if the sequence of data it processes is long enough it will forget things from earlier in the sequence.

In 2017 a model was invented that solves at least 1 of these issues: the Transformer. Instead of working like a recurrent network, a Transformer processes an entire sequence at once, and encodes a time signal (the 'positional embedding') into the input so it can keep track of what was where - since a downside of processing everything in parallel is that the model would otherwise have no idea which element of the sequence was which.
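
To illustrate, here's a minimal sketch of the sinusoidal positional embedding scheme from the original Transformer paper - the sequence length and embedding size below are arbitrary:

import numpy as np

def positional_embedding(seq_len, d_model):
    # Each position gets a unique pattern of sine/cosine values
    # at geometrically-spaced frequencies.
    pos = np.arange(seq_len)[:, np.newaxis]      # (seq_len, 1)
    i = np.arange(0, d_model, 2)[np.newaxis, :]  # (1, d_model/2)
    angles = pos / np.power(10000, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe  # added element-wise to the input embeddings

print(positional_embedding(128, 64).shape)  # (128, 64)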

The transformer model also brought with it the concept of 'attention', which is where the model can decide what parts of the input data are important at each step. This helps the transformer to focus on the bits of the data that are relevant to the task at hand.

Since 2017, the number of variants of the original transformer has exploded, which stands testament to the popularity of this model architecture.

Regarding inputs and outputs, most transformers will take an input in the same shape as the word embeddings in the first post in this series, and will spit out an output of the same shape - just potentially with a different size for the embedding dimension:

A diagram explaining how a transformer works. A series of sine waves are added as a positional embedding to the data before it goes in.

In this fashion, transformers are limited only by memory requirements and computational expense with respect to sequence length, which some model designs have exploited to convince transformer-style models to handle significantly longer sequences.


This is just a little look at sequenced data modelling with LSTMs and Transformers. I seem to have my ordering a bit backwards here, so in the next few posts we'll get down to basics and explain some concepts like Tensors and shapes, loss functions and backpropagation, attention, and how AI models are put together.

Are there any AI-related concepts or questions you would like answering? Leave a comment below and I'll write another post in this series to answer your question.

PhD Update 18: The end and the beginning

Hello! It has been a while. Things have been most certainly happening, and I'm sorry I haven't had the energy to update my blog here as often as I'd like. Most notably, I submitted my thesis last week (gasp!)! This does not mean the end of this series though - see below.

Before we continue, here's our traditional list of past posts:

Since last time, that persuasive-tactics-detection challenge has ended too, and we have a paper going through at the moment: BDA at SemEval-2024 Task 4: Detection of Persuasion in Memes Across Languages with Ensemble Learning and External Knowledge.

Theeeeeeeeeeeeesis

Hi! A wild thesis appeared! Final counts are 35,417 words, 443 separate sources, 167 pages, and 50 pages of bibliography - making that 217 pages in total. No wonder it took so long to write! I submitted at 2:35pm BST on Friday 10th May 2024.

I. can. finally. rest.

It has been such a long process, and taken a lot of energy to complete it, especially since large amounts of formal academic writing isn't usually my thing. I would like to extend a heartfelt thanks especially to my supervisor for being there from beginning to end and beyond to support me through this endeavour - and everyone else who has helped out in one way or another (you know who you are).

Next step is the viva, which will be some time in July. I know who my examiners are going to be, but I'm unsure whether it would be wise to say here. Between now and then, I want to stalk investigate my examiners' research histories, which should give me an insight into their perspective on my research.

Once the viva is done, I expect to have a bunch of corrections to do. Once those are completed, I will to the best of my ability be releasing my thesis for all to read for free. I still need to talk to people to figure out how to do that, but rest assured that if you can't get enough of my research via the papers I've written for some reason, then my thesis will not be far behind.

Coming to the end of my PhD and submitting my thesis has been surprisingly emotionally demanding, so I thank everyone who is still here for sticking around and being patient as I navigate these unfamiliar events.

Researchy things

While my PhD may be coming to a close (I still can't believe this is happening), I have confirmed that I will have dedicated time for research-related activities. Yay!

This means, of course, that as one ending draws near, a new beginning is also starting. Today's task after writing this post is to readificate around my chosen idea to figure out where there's a gap in existing research for me to make a meaningful contribution. In a very real way, it's almost like I am searching for directions as I did in my very first post in this series.

My idea is connected to the social media research that I did previously on multimodal natural language processing of flooding tweets and images with respect to sentiment analysis (it sounded better in my head).

Specifically, I think I can do better than just sentiment analysis. Imagine an image of a street that's partially underwater. Is there a rescue team on a boat rescuing someone? What about the person on the roof waving for help? Perhaps it's a bridge that's about to be swept away, or a tree that has fallen down? Can we both identify these things in images and map them to physical locations?

Existing approaches that e.g. detect where the water is in an image are prone to misidentifying water that is in fact exactly where it should be, such as in rivers and lakes. To this end, I propose looking for the people and things in the water rather than the water itself - going for a people-centred approach to flood information management.

I imagine I'll probably use the data from social media I already have (getting hold of new data from social media is very difficult at the moment) - filtered for memes and misinformation this time - but if you know of any relevant sources of data or datasets, I'm absolutely interested: please get in touch. It would be helpful but not required if it's related to a specific natural disaster event (I'm currently looking at floods; branching out to others is absolutely possible and on the cards, but I will need to submit a new ethics form for that before touching any data).

Another challenge I anticipate is that of unlabelled data. It is often the case that large volumes of data are generated during an unfolding natural disaster, and processing it all can be a challenge. To this end, somehow I want my approach here to make sense of unlabelled images. Of course, generalist foundational models like CLIP are great, but lack the ability to be specific and accurate enough with natural disaster images.

I also intend that this idea would be applicable to images from a range of sources, and not just with respect to social media. I don't know what those sources could be just yet, but if you have some ideas, please let me know.

Finally, I am particularly interested if you or someone you know are in any way involved in natural disaster management. What kinds of challenges do you face? Would this be in any way useful? Please do get in touch either in the comments below or sending me an email (my email address is on the homepage of this website).

Persuasive tactics challenge

The research group I'm part of were successful in completing SemEval Task 4: Multilingual Detection of Persuasion Techniques in Memes! I implemented the 'late fusion engine', which is a fancy name for an algorithm that uses basic probability to combine categorical predictions from multiple different models, depending on how accurate each model was on a per-category basis.
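
As a rough illustration of the general idea - this is my own minimal sketch, not the exact algorithm from the paper - such a fusion engine might weight each model's prediction by its per-category accuracy:

import numpy as np

def late_fusion(predictions, accuracies):
    # predictions: (n_models, n_categories) predicted probabilities
    # accuracies:  (n_models, n_categories) per-category accuracy of each model
    weights = accuracies / accuracies.sum(axis=0, keepdims=True)
    return (predictions * weights).sum(axis=0)  # fused (n_categories,) prediction

# Toy usage: 2 models, 3 categories
predictions = np.array([[0.9, 0.2, 0.4], [0.6, 0.3, 0.8]])
accuracies  = np.array([[0.8, 0.5, 0.3], [0.4, 0.7, 0.9]])
print(late_fusion(predictions, accuracies))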

I'm unsure of the status of the paper, but I think it's been through peer review, so you can find it here: BDA at SemEval-2024 Task 4: Detection of Persuasion in Memes Across Languages with Ensemble Learning and External Knowledge.

I wasn't the lead on that challenge, but I believe the lead person (a friend of mine - if you are reading this and want me to link to somewhere here, get in touch) on that project will be going to Mexico to present it.

Teaching

I'm still not sure what I can say and what I can't, but starting in September I have been asked to teach a module on basic system administration skills. It's a rather daunting prospect, but I have a bunch of people much more experienced than me to guide me through the process. At the moment the plan is for 21 lecture-ish things, 9 labs, and the assessment stuff, so I'm rather nervous about preparing all of this content.

Of course, as a disclaimer, nothing written in this section should be taken as absolute. (Hopefully) more information at some point, though unfortunately I doubt that I would be allowed to share the content created, given that it's University course material.

As always though, if there's a specific topic that lies anywhere within my expertise that you'd like explaining, I'm happy to write a blog post about it (in my own time, of course).

Conclusion

We've taken a little look at what has been going on since I last posted, and while this post has been rather talky (will try for some kewl graphics next time!), I nonetheless hope this has been an interesting read. I've submitted my thesis, started initial readificating for my next research project - the ideas for which we've explored here - helped out with a group research challenge project thingy, and been invited to do some teaching!

Hopefully the next post in this series will come out on time - long-term the plan is to absolutely continue blogging about the research I'm doing.

Until next time, the journey continues!

(Oh yeah! and finally finally, to the person who asked a question by email about this old post (I think?), I'm sorry for the delay and I'll try to get back to you soon.)

.desktop files: Launcher icons on Linux

Heya! Just thought I'd write a quick reminder post for myself on the topic of .desktop files. In most Linux distributions, launcher icons for things are dictated by files with the file extension .desktop.

Of course, most programs these days come with a .desktop file automatically, but if you for example download an AppImage, then you might not get an auto-generated one. You might also be packaging something for your distro's package manager (go you!) - something I do semi-regularly when apt repos for software I need aren't updated (see my apt repository!).

They can live either locally to your user account (~/.local/share/applications) or globally (/usr/share/applications), and they follow the XDG desktop entry specification (see also the Arch Linux docs page, which is fabulous as usual ✨). It's basically a fancy .ini file:

[Desktop Entry]
Encoding=UTF-8
Type=Application
Name=Krita
Comment=Krita: Professional painting and digital art creation program
Version=1.0
Terminal=false
Exec=/usr/local/bin/krita
Icon=/usr/share/icons/krita.png

Basically, leave the first line, the Type, the Version, the Terminal, and the Encoding directives alone, but the others you'll want to customise:

  • Name: The name of the application to be displayed in the launcher
  • Comment: The short 1-line description. Some distros display this as the tooltip on hover, others display it in other ways.
  • Exec: Path to the binary to execute. Prepend with env Foo=bar etc if you need to set some environment variables (e.g. running on a discrete card - 27K views? wow O.o)
  • Icon: Path to the icon to display as the launcher icon. For global .desktop files, this should be located somewhere in /usr/share/icons.

This is just the basics. There are many other directives you can include - like the Categories directive (e.g. Categories=Graphics;RasterGraphics;), which describes - if your launcher supports categories - which categories a given launcher icon should appear under.

Troubleshooting: It took me waaay too long to realise this, but if you have put your .desktop file in the right place and it isn't appearing - even after a relog - then the desktop-file-validate command could come in handy:

desktop-file-validate path/to/file.desktop

It validates the content of a given .desktop file. If it contains any errors, then it will complain about them for you - unlike your desktop environment which just ignores .desktop files that are invalid.

If you found this useful, please do leave a comment below about what you're creating launcher icons for!

Defining AI: Image segmentation

Banner showing the text 'Defining AI' on top of translucent white vertical stripes against a voronoi diagram in white on a pink/purple background. 3 progressively larger circles are present on the right-hand side.

Welcome back to Defining AI! This series is all about defining various AI-related terms in short-and-sweet blog posts.

In the last post, we took a quick look at word embeddings, which is the key behind how AI models understand text. In this one, we're going to investigate image segmentation.

Image segmentation is a particular way of framing a learning task for an AI model that takes an image as an input, and then instead of classifying the image into a given set of categories, it classifies every pixel by the category each one belongs to.

The simplest form of this is what's called semantic segmentation, where we classify each pixel via a given set of categories - e.g. building, car, sky, road, etc if we were implementing a segmentation model for some automated vehicle.

If you've been following my PhD journey for a while, you'll know that it's not just images that can be framed as an 'image' segmentation task: any data that is 2D (or can be convinced to pretend to be 2D) can be framed as an image segmentation task.

The output of an image segmentation model is basically a 3D map. This map will obviously have the width and height, but then also have an extra dimension for the channel. This is best explained with an image:

(Above: a diagram explaining how the output of an image segmentation model is formatted. See below explanation. Extracted from my work-in-progress thesis!)

Essentially, each value in the 'channel' dimension is the probability that the pixel in question belongs to the corresponding class. So, for example, a single pixel in a model for predicting the background and foreground of an image might look like this:

[ 0.3, 0.7 ]

....if we consider these classes:

[ background, foreground ]

....then this pixel has a 30% chance of being a background pixel, and a 70% chance of being a foreground pixel - so we'd likely assume it's a foreground pixel.

Built up over the course of an entire image, you end up with a classification of every pixel. This could lead to models that separate the foreground and background in live video feeds, autonomous navigation systems, defect detection in industrial processes, and more.
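
For instance, here's a minimal sketch with numpy of turning such a probability map into a per-pixel class map - the shapes and values are purely illustrative:

import numpy as np

# A fake model output: height × width × channels, one channel per class
classes = ["background", "foreground"]
prediction = np.random.rand(256, 256, len(classes))
prediction /= prediction.sum(axis=-1, keepdims=True)  # normalise to probabilities

# Pick the most probable class for every pixel
class_map = np.argmax(prediction, axis=-1)  # shape: (256, 256)
print(classes[class_map[0, 0]])  # the class of the top-left pixel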

Some models, such as the Segment Anything Model (website) have even been trained to generically segment any input image, as in the above image where we have a snail sat on top of a frog sat on top of a turtle, which is swimming in the water.

As alluded to earlier, you can also feed in other forms of 2D data. For example, this paper predicts rainfall radar data a given number of hours into the future from a sample at the present moment. Or, my own research approximates the function of a physics-based model!


That's it for this post. If you've got something you'd like defining, please do leave a comment below!

I'm not sure what it'll be next, but it might be either staying at a high-level and looking at different ways that we can frame tasks in AI models, or I could jump to a lower level to look at fundamentals like loss (error) functions, backpropagation, layers (AI models are made up of multiple smaller layers), etc. What would you like to see next?

Defining AI: Word embeddings

Hey there! It's been a while. After writing my thesis for the better part of a year, I've been getting a bit burnt out on writing - so unfortunately I had to take a break from writing for this blog. My thesis is almost complete though - more on this in the next post in the PhD update blog post series. Other higher-effort posts are coming (including the belated 2nd post on NLDL-2024), but in the meantime I thought I'd start a series on defining various AI-related concepts. Each post is intended to be relatively short in length to make them easier to write.

Normal scheduling will resume soon :-)

Banner showing the text 'Defining AI' on top of translucent white vertical stripes against a voronoi diagram in white on a pink/purple background. 3 progressively larger circles are present on the right-hand side.

As you can tell by the title of this blog post, the topic for today is word embeddings.

AI models operate fundamentally on numerical values and mathematics - so naturally to process text one has to encode said text into a numerical format before it can be shoved through any kind of model.

This process of converting text to a numerically-encoded value is called word embedding!

As you might expect, there are many ways of doing this. Often, this involves looking at what other words a given word often appears next to. This could be for example framed as a task in which a model has to predict a word given the words immediately before and after it in a sentence, and then take the output of the last layer before the output layer as the embedding (word2vec).
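
For instance, here's a minimal sketch of training such a model with the gensim library - the toy corpus is purely for illustration, and I'm assuming gensim's 4.x API:

from gensim.models import Word2Vec

# A tiny toy corpus: one list of tokens per sentence
sentences = [["rain", "water", "wet"], ["sun", "dry", "hot"], ["rain", "wet", "cold"]]
model = Word2Vec(sentences, vector_size=32, window=2, min_count=1)

vector = model.wv["rain"]  # the 32-dimensional embedding vector for 'rain'
print(vector.shape)        # (32,)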

Other models use matrix math to calculate this instead, producing a dictionary file as an output (GloVe | paper). Still others use large models that are trained to predict randomly masked words, and process entire sentences at once (BERT and friends) - though these are computationally expensive since every bit of text you want to embed has to get pushed through the model.

Then there's contrastive learning approaches. More on contrastive learning later in the series if anyone's interested, but essentially it learns by comparing pairs of things. This can lead to a higher-level representation of the input text, which can increase performance in some circumstances, and other fascinating side effects that I won't go into in this post. Chief among these is CLIP (blog post).

The idea here is that semantically similar words wind up having similar sorts of numbers in their numerical representation (we call this a vector). This is best illustrated with a diagram:

Words embedded with GloVe and displayed in a heatmap

I embedded some words with GloVe (really easy to use, since it's literally just a dictionary), and then used cosine similarity to compute the similarity between the different words. Once done, I plotted this in a heatmap.

As you can see, rain and water are quite similar (1 = identical; 0 = completely different), but rain and unrelated are not really alike at all.
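
If you want to play along, here's a minimal sketch of that similarity computation - assuming you've downloaded one of the pre-trained GloVe files (the filename below is illustrative):

import numpy as np

def load_glove(filepath):
    # Each line of a GloVe file is a word followed by its vector components
    embeddings = {}
    with open(filepath, encoding="utf-8") as handle:
        for line in handle:
            parts = line.rstrip().split(" ")
            embeddings[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return embeddings

def cosine_similarity(a, b):
    # 1 = pointing the same way; 0 = unrelated; -1 = opposite
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

glove = load_glove("glove.6B.50d.txt")
print(cosine_similarity(glove["rain"], glove["water"]))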


That's about the long and short of word embeddings. As always with these things, you can go into an enormous amount of detail, but I have to cut it off somewhere.

Are there any AI-related concepts or questions you would like answering? Leave a comment below and I'll write another post in this series to answer your question.

NLDL was awesome! >> NLDL-2024 writeup

A cool night sky and northern lights banner I made in Inkscape. It features mountains and AI-shaped constellations, with my logo and the text "@ NLDL 2024".

It's the week after I attended the Northern Lights Deep Learning Conference 2024, and now that I've had time to start to process everything I learnt and experienced, it's blog post time~~✨ :D

Edit: Wow, this post took a lot more effort to put together than I expected! It's now the beginning of February and I'm just finishing this post up - sorry about the wait! I think this is the longest blog post to date. Consider this a mega-post :D

In this post - which is likely to be quite long - I'm going to talk a bit about the trip itself, about what happened, and the key things that I learnt. Bear in mind that this post is written while I'm still sorting my thoughts out - it's likely going to take many months to fully dive into everything I saw that interested me :P

Given I'm still working my way through everything I've learnt, it is likely that I've made some errors here. Don't take my word for any facts written here please! If you're a researcher I'm quoting here and you have spotted a mistake, please let me know.

I have lots of images of slides and posters that interested me, but they don't make a very good collage! To this end images shown below are from the things I saw & experienced. Images of slides & posters etc available upon request.

Note: All paper links will be updated to DOIs when the DOIs are released. All papers have been peer-reviewed.

Day 1

(Above: A collage of images from the trip travelling to the conference. Description below.)

Images starting top-left, going anticlockwise:

  1. The moon & venus set against the first.... and last sunrise I would see for the next week
  2. Getting off the first plane in Amsterdam Schiphol airport
  3. Flying into Bergen airport
  4. The teeny Widerøe turboprop aircraft that took me from Bergen to Tromsø
  5. A view from the airport window when collecting my bags
  6. Walking to the hotel
  7. Departures board in Bergen airport

After 3 very long flights the day before (the views were spectacular, but they left me exhausted), the first day of the conference finally arrived. As I negotiated public transport to get myself to UiT, The Arctic University of Norway I wasn't sure what to expect, but as it turned out academic conferences are held in (large) lecture theatres (at least this one was) with a variety of different events in sequence:

  • Opening/closing addresses: Usually at the beginning & ends of a conference. Particularly the beginning address can include useful practical information such as where and when food will be made available.
  • Keynote: A longer (usually ~45 minute) talk that sets the theme for the day or morning/afternoon. Often done by famous and/or accomplished researchers.
  • Oral Session: A session chaired by a person [usually relatively distinguished] in which multiple talks are given by individual researchers with 20 minutes per talk. Each talk is 15 minutes with 5 minutes for questions. I spoke in one of these!
  • Poster session: Posters are put up by researchers in a designated area (this time just outside the room) and people can walk around and chat with researchers about their research. If talks have a wide reach and shallow depth, posters have a narrow reach and much more depth.
    • I have a bunch of photographs of posters that interested me for one reason or another, but it will take me quite a while to work through them all properly and extract the interesting bits.
  • Panel discussion: This is where a number of distinguished people sit down on chairs/stools at the front, and a chair asks a series of questions and moderates the resulting discussion. Questions from the audience may also be allowed after some of the key preset questions have been asked.

This particular conference didn't have multiple events happening at once (called 'tracks' in some conferences I think), which I found very helpful as I didn't need to figure out which events I should attend or not. Some talks didn't sound very interesting but then turned out to be some of the highlights of the conference for me, as I'll discuss below. Definitely a fan of this format!

The talks started off looking at the fundamentals of AI. Naturally, this included a bunch of complex mathematics - the understanding of which in real-time is not my strong point - so while I did make some notes on these I need to go back and take a gander at the papers of some of the talks to fully grasp what was going on.

Moving on from AI fundamentals, the next topic was reinforcement learning. While not my current area of research, some interesting new uses of the technology were discussed, such as dynamic pathing/navigation based on the information gained from onboard sensors by Alouette von Hove from the University of Oslo - the presented example was determining the locations of emitters of greenhouse gases such as CO₂ & methane.

Interspersed in-between the oral sessions were poster sessions. At NLDL-2024 these were held in the afternoons and also had fruit served alongside them, which I greatly appreciated (I'm a huge fan of fruit). At these there were posters from the people who had presented earlier in the day, but also some additional posters from researchers who weren't presenting a talk.

If talks reach a wide audience at a shallow depth, the posters reached a narrower audience but at a much greater depth. I found the structure of having the talk before the poster very useful - not only for presenting my own research (more on that later), but also for picking out posters I wanted to visit to learn more about their approaches.

On day 1, the standout poster for me was one on uncertainty quantification in image segmentation models - Confident Naturalness Explanation (CNE): A Framework to Explain and Assess Patterns Forming Naturalness. While their approach to increasing the explainability of image segmentation models (particularly along class borders) was applied to land use and habitat identification, I can totally see it being applicable to many other different projects in a generic 'uncertainty-aware image segmentation' form. I would very much like to look into this one deeply and consider applying lessons learnt to my rainfall radar model.

Another interesting poster worked to segment LiDAR data in a similar fashion to that of normal 'image segmentation' (that I'm abusing in my research) - Beyond Siamese KPConv: Early Encoding and Less Supervision.

Finally, an honourable mention is one which applied reinforcement learning to task scheduling - Scheduling conditional task graphs with deep reinforcement learning.

Diversity in AI

In the afternoon, the Diversity in AI event was held. The theme was fairness of AI models, and this event was hugely influential for me. Through a combination of cutting-edge research, helpful case studies, and illustrations, the speakers revealed hidden sources of bias and novel ways to try and correct for them. They asked the question of "what do we mean by a fair AI model?", and explored the multiple different facets of the question - how fairness in an AI model can mean different things in different contexts and to different people.

They also demonstrated how taking a naïve approach to correcting for e.g. bias in a binary classifier could actually make the problem worse!

I have not yet had time to go digging into this, but I absolutely want to spend at least an entire afternoon dedicated to digging into and reading around the subject. Previously, I had no idea how big and pervasive the problem of bias in AI was, so I most certainly want to educate myself to ensure models that I create as a consequence of the research I do are as ethical as possible.

Depending on how this research reading goes, I could write a dedicated blog post on it in the future. If this would be interesting to you, please comment below with the kinds of things you'd be interested in.

Another facet of the diversity event was that of hiring practices and diversity in academia. In the discussion panel that closed out the day, the current dilemma of low diversity (e.g. gender balance) among students taking computer science as a subject was discussed. It was suggested that how computer science is portrayed can make a difference, and that people with different backgrounds will approach and teach the subject through different lenses. Mental health was also mentioned as a factor that requires work and effort to reduce stigma, encourage discussion, and generally improve the situation.

All in all I found the diversity event to be a very useful and eye-opening event that I'm glad I attended.

(Above: A collage from day 1 of the conference)

Images starting top-left, going anticlockwise in an inwards spiral:

  1. The conference theatre during a break
  2. Coats hung up on the back wall of the conference theatre - little cultural details stood out to me and were really cool!
  3. On the way in to the UiT campus on day 1
  4. Some plants under some artificial sunlight bulbs I found while having a wander
  5. Lunch on day 1: rice (+ curry, but I don't like curry)
  6. NLDL-2024 sign
  7. View from the top on Fjellheisen
  8. Cinnamon bun that was very nice and I need to find a recipe
  9. View from the cable car on the way up Fjellheisen

Social 1: Fjellheisen

The first social after the talks closed out for the day was a trip up the local mountain Fjellheisen (pronounced fyell-hai-sen, as far as I can tell). Thankfully a cable car was available to take conference attendees (myself included) up the mountain, as it was significantly cold and snowy - especially 420m above sea level at the top. Although it was very cloudy at the time, with a stratus cloud base around 300m (perhaps even lower), we still got some fabulous views of Tromsø and the surrounding area.

There was an indoor seating area too, in which I warmed up with a cinnamon bun and had some great conversations with some of the other conference attendees. Social events and ad-hoc discussions are, I have discovered, an integral part of the conference experience. You get to meet so many interesting people and discover so many new things that you wouldn't otherwise get the chance to explore.

Day 2

Day 2 started with AI for medical applications, plus what seemed to be an unofficial secondary theme continuing the discussion of bias and fairness, which made the talks just as interesting and fascinating as the previous day's. By this point I had figured out the conference-provided bus, resulting in more cool discussions on the way to and from the conference venue at UiT.

Every talk was interesting in its own way, with discussions of shortcut learning (where a model learns to recognise something other than your intended target - e.g. that some medical device visible in an X-ray is an indicator of some condition, when it wouldn't ordinarily be present at test time), techniques to utilise contrastive learning in new ways (classifying areas of interest in very large images from microscopes), and applying the previous discussion of bias and fairness to understanding bias in contrastive learning systems - and what we can do about it through framing the task the model is presented with.

The research project that stood out to me was entitled Local gamma augmentation for ischemic stroke lesion segmentation on MRI by Jon Middleton at the University of Copenhagen. Essentially, they correct for differing ranges of brightness in images from MRI brain scans before training a model, to increase accuracy and reduce bias.

The poster session again had some amazing projects that are worth mentioning. Of course, as with this entire blog post, this is just my own personal recollection of the things that I found interesting - I encourage you to go to a conference in person at some point if you can!

The highlight was a poster entitled LiDAR-based Norwegian Tree Species Detection Using Deep Learning. The authors segment LiDAR data by tree species, but have also invented a clever augmentation technique they call 'cowmix augmentation' to sharpen the model's attention to detail on class borders and increase the diversity of their dataset.

Another cool poster was Automatic segmentation of ice floes in SAR images for floe size distribution. By training an autoencoder to reconstruct SAR (Synthetic Aperture Radar) images, they use the resulting output to analyse the distribution of ice floe sizes in Antarctica.

I found that NLDL-2024 had quite a number of people working in various aspects of computer vision and image segmentation as you can probably tell from the research projects that have stood out to me so far. Given I went to present my rainfall radar data (more on that later), image handling projects stood out to me more easily than others. There seemed to be less of a focus on Natural Language Processing - which, although discussed at points, wasn't nearly as prominent a theme.

One NLP project that did feature though was a talk on anonymising data in medical records before they are e.g. used in research projects. The researcher presented an approach using a generative text model to identify personal information in medical records. By combining it with a regular expression system, more personal information could be identified and removed than before.

While I'm not working with textual data at the minute, part of my PhD does involve natural language processing. Maybe in the future, when I have some more NLP-based research to present, it might be nice to attend an NLP-focused conference too.

(Above: A collage of photos from day 2 of the conference.)

Images from top-left in an anticlockwise inwards spiral:

  1. Fjellheisen from the hotel at which the conference dinner took place
  2. A cool church building in a square I walked through to get to the bus
  3. The hallway from the conference area up to the plants in the day 1 collage
  4. Books in the store on the left of #3. I got postcards here!
  5. Everyone walking down towards the front of the conference theatre to have the group photo taken. I hope they release that photo publicly! I want a copy so bad...
  6. The lobby of the conference dinner hotel. It's easily the fanciest place I have ever seen....!
  7. The northern lights!!! The clouds parted for half an hour and it was such a magical experience.
  8. Moar northern lights of awesomeness
  9. They served fruit during the afternoon poster sessions! I am a big fan. I wonder if the University of Hull could do this in their events?
  10. Lunch on day 2: fish pie. It was very nice!

Social 2: Conference dinner

The second social event that was arranged was a conference dinner. It was again nice to have a chance to chat with others in my field in a smaller, more focused setting - each table had about 7 people sitting at it. The food served was also very fancy - nothing like what I'm used to eating on a day-to-day basis.

The thing I will always remember though is shortly before the final address, someone came running back into the conference dinner hall to tell us they'd seen the northern lights!

Grabbing my coat and rushing out the door to some bewildered looks, I looked up and.... there they were.

As if they had always been there.

I saw the northern lights!

Seeing them has always been something I wanted to do, so I am so happy to have had the chance. The rest of the time it was cloudy, but the clouds parted for half an hour that evening and it was such a magical moment.

They were simultaneously exactly and nothing like what I expected. They danced around the sky a lot, so you really had to have a good view in all directions and keep scanning. They also moved much faster than I expected: some would flash and be gone in just moments, while others would stick around, seemingly doing nothing, for over a minute.

A technique I found to be helpful was to scan the sky with my phone's camera. It could see 'more' of the northern lights than you can see with your eyes, so you could find a spot in the sky that had a faint green glow with your phone and then stare at it - and more often than not it would brighten up quickly so you could see it with your own eyes.

Day 3

It felt like the conference went by at lightning speed. The entire time, I was focused on learning and experiencing as much as I could - and seemingly no sooner had the conference started than we all reached the last day.

The theme for the third day started with self-supervised learning. As I'm increasingly discovering, self-supervised learning is all about framing the learning task you give an AI model in a clever way that partially or completely does away with the need for traditional labels. There were certainly some clever solutions on show at NLDL-2024:

An honourable mention goes to the paper on a new speech-editing model called FastStitch. Unfortunately the primary researcher was not able to attend - a shame, cause it would have been cool to meet up and chat - but their approach looks useful for correcting speech in films, anime, etc after filming... even though it could also be used for some much more nefarious purposes too (though that goes for all of the new next-gen generative AI models coming out at the moment).

This was also the day I presented my research! As I write this, I realise that this post is now significantly long so I will dedicate a separate post to my experiences presenting my research. Suffice to say it was a very useful experience - both from the talks and the poster sessions.

Speaking of poster sessions, there was a really interesting poster today entitled Deep Perceptual Similarity is Adaptable to Ambiguous Contexts, which proposes that image similarity is more complicated than just comparing pixels: it's about shapes and the object shown - not just the style a given image is drawn in. To this end, they use a contrastive-style approach that compares how similar a series of augmentations are to the original source image as a training task.

Panel Discussion

Before the final poster session of the (main) conference, a panel discussion between 6 academics (1 chairing) who sounded very distinguished (sadly I did not completely catch their names, and I will save everyone the embarrassment of the nickname I had to assign them to keep track of the discussion in my notes) closed out the talks. There was no set theme that jumped out to me (other than AI of course), but like the diversity in AI conference discussion panel on day 1 the chair had some set questions to ask the academics making up the discussion.

The role of Universities and academics was discussed at some length. Recently, large tech companies like OpenAI, Google, and others have been driven by profit to race to put next-generation foundational models (a term new to me that describes large general models like GPT, Stable Diffusion, Segment Anything, CLIP, etc) to work in anything and everything they can get their hands on..... often to the detriment of user privacy.

It was mentioned that researchers in academia have a unique freedom to choose what they research in a way that those working in industry do not. It was suggested that academia must be one step ahead of industry, and understand the strengths/weaknesses of the new technologies -- such as foundational models, and how they impact society. With this freedom, researchers in academia can ask the how and why, which industry can't spare the resources for.

The weaknesses of academia were also touched on, in that academia is very project-based - and funding for long-term initiatives can be very difficult to come by. It was also mentioned that academia can get stuck on optimising for benchmarks, in the field of AI specifically. To this end, I would guess creativity is really important to invent innovative new ways of solving existing real-world problems, rather than focusing too much on abstract benchmarks.

The topic of the risks of AI in the future also came up. While the currently-scifi concept of an Artificial General Intelligence (AGI) that is smarter than humans is a hot topic at the moment, whether or not it's actually possible is not clear (personally, it seems rather questionable that it's possible at all) - and certainly not in the next few decades. Rather than worrying about AGI, everyone agreed that bias and unfairness in AI models is already a problem that needs to be urgently addressed.

The panel agreed that people believing the media hype generated by the large tech companies is arguably more dangerous than AGI itself... even if it were right around the corner.

The EU AI Act is right around the corner, which requires transparency about the data used to train a given AI model, among many other things. This is a positive step forwards, but the panel was concerned that the act could lead to companies cutting corners on safety just to tick boxes. They were also concerned that an industry would spring up around the act, providing services to help other businesses comply - which risks raising the barrier to entry significantly. How the act is actually implemented will have a large effect on its effectiveness.

While the act risks e.g. ChatGPT being forced to pull out of the EU if it does not comply with the transparency rules, the panel agreed that we must take an alternate path to that of closed-source models. Open source alternatives to e.g. ChatGPT do exist, and are only about 1.5 years behind the current state of the art. At first it appears that privacy and openness are at odds, but in Europe we need both.

The panel was asked what advice they had for young early-career researchers (like me!) in the audience, and had a range of helpful tips:

  • Don't just follow trends because everyone else is. You might see something different in a neglected area, and that's important too!
  • Multidisciplinary research is a great way to see different perspectives.
  • Speak to real people on the ground about what their problems are, and use that as inspiration.
  • Don't keep chasing after the 'next big thing' (although keeping up to date in your field is important, I think).
  • Work on the projects you want to work on - academia affords a unique freedom to the researchers working within it.

All in all, the panel was a fascinating big-picture discussion - I hadn't really seen the role of academia in current global issues discussed at that level before this point.

AI in Industry Event

The last event of the conference came around much faster than I expected - I suppose spending every waking moment focused on conferencey things will make time fly by! This event was run by 4 different people from 4 different companies involved in AI in one way or another.

It was immediately obvious that these talks were by industry professionals rather than researchers, since they somehow managed to say a lot without revealing all that much about the internals of their companies. It was also interesting that some of the talks were almost a pitch to the researchers present, asking if they had any ideas or solutions to the speakers' problems.

This is not to say that the talks weren't useful. They were a useful insight into how industry works, and how the impact of research can be multiplied by being applied in an industry context.

It was especially interesting to listen to the discussion panel that was held between the 4 presenters / organisers of the industry event. 1 of them served as chair, moderating the discussion and asking questions to direct it. They discussed issues like silos of knowledge in industry vs academia, the importance of sharing knowledge between the 2 disciplines, and the challenges of AI explainability in practice. The panellists had valuable insights into the realities of implementing research outputs on the ground, the importance of communication, and some advice for PhD students in the audience considering a move into industry after their PhD.

A collage of photos I took during day 3

(Above: A collage from day 3 of the conference)

Images starting top-left, going anticlockwise in an inwards spiral:

  • A cool ship I saw on the way to the bus that morning
  • A neat building I saw on the way to the bus. The building design is just so different to what I'm used to... it gives me loads of building inspiration for Minetest!
  • The cafeteria in which we ate lunch each day! It was so well designed, and the self-clear system was very functional and cool!
  • The conference theatre during one of the talks.
  • Day 3's lunch: lasagna! They do cool food at UiT in Tromsø, Norway!
  • The last meal I ate (I don't count breakfast :P) in the evening before leaving the following day
  • The library building I went past on the way back to the hotel in the evening. The integrated design with books + tables and interactive areas is just so cool - we don't have anything like it that I know of over here!
  • A postbox! I almost didn't find it, but I'm glad I was able to send a postcard or two.
  • The final closing address! It was a bittersweet moment that the conference was already coming to a close.

Closing thoughts

Once the industry event was wrapped up, it was time for the closing address. No sooner had it started than the conference was over! I felt a strange mix of exhaustion, disbelief that the conference was already over, and sadness that everyone would be going their separate ways.

The very first thing I did after eating something and getting back to my hotel was collapse in bed and sleep until some horribly early hour in the morning (~4-5am, anyone?) when I needed to catch the flight home.

Overall it was an amazing conference, and I've learnt so much! It's felt so magical, like anything is possible ✨ I've met so many cool people and been introduced to so many interesting ideas, it's gonna take some time to process them all.

I apologise for how long and rambly this post has turned out to be! I wanted to get all my thoughts down in something coherent enough I can refer to it in the future. This conference has changed my outlook on AI and academia, and I'm hugely grateful to my institution for finding the money to make it possible for me to go.

It feels impossible to summarise the entire conference in 4 bullet points, but here goes:

  • Fairness: What do we mean by 'fair'? Hidden biases, etc. Explainable AI sounds like an easy solution, but it can mislead you - and attempts to improve perceived 'fairness' can in actuality make the problem worse without you ever knowing!
  • Self-supervised learning: Clustering, contrastive learning, and more - also tying in with the fairness theme via sample weighting and other techniques.
  • Foundation models: Large language models and the like that are very large and expensive to train, sometimes available pretrained and sometimes only as an API. They can zero-shot many different tasks, but what about fairness, bias, and ethics?
  • Reinforcement learning: ...and its applications.

Advice

I'm going to end this mammoth post with some advice to prospective first-time conference goers. I'm still rather inexperienced with these sortsa things, but I do have a few things I've picked up.

If you're unsure about going to a conference, I can thoroughly recommend attending one. If you don't know which conference you'd like to attend, I recommend seeking advice from someone in your field with more experience than you. What I can say is that I really appreciated how NLDL-2024 was not too big and not too small: it had an estimated 250 attendees, and I'm very thankful it did not have multiple tracks - this way I didn't hafta sort through which talks interested me and which ones didn't. The talks that didn't initially interest me sometimes surprised me: if I'd had the choice I would have picked an alternative, but in the end I'm glad I sat through all of them.

Next, speak to people! You're all in this together. Speak to people at lunch. On the bus/train/whatever. Anywhere and everywhere you see someone with the same conference lanyard as you, strike up a conversation! The other conference attendees have likely worked just as hard as you to get there & will likely be totally willing to talk to you. You'll meet all sorts of new people who are just as passionate about your field as you are, which is an amazing experience.

TAKE NOTES. I used Obsidian for this purpose, but use anything that works for you. Take notes both formally - from talks, panel discussions, and other formal events - and informally, during chats and discussions you hold with other conference attendees. Don't forget to note down who you spoke to as well! I'm bad at names and faces, but your notes will serve as a permanent record of the things you learnt and experienced at the time that you can refer back to later. You aren't going to remember everything you see (unless you have a perfect photographic memory, of course), so make notes!

On the subject of recording your experiences, take photos too. I'm now finding it very useful to have photos of important slides and posters that I saw to refer back to. I later developed a habit of photographing the first slide of every talk, which has also proved to be invaluable.

Having business cards to hand out can be extremely useful to follow up conversations. If you have time, get some made before you go and take them with you. I included some pretty graphics from my research on mine, which served as useful talking points to get conversations started.

Finally, have fun! You've worked hard to be here. Enjoy it!

If you have any thoughts / comments about my experiences at NLDL-2024, I'd love to hear them! Do leave a comment below.

LaTeX templates for writing with the University of Hull's referencing style

Hello, 2024! I'm writing this while it is still some time before the new year, but I realised just now (a few weeks ago for you) that I never blogged about the LaTeX templates I have been maintaining for a few years now.

It's no secret that I do all of my formal academic writing in LaTeX - a typesetting language that is the industry standard in the field of Computer Science (and others too, I gather). While it's a very flexible (and at times obtuse, but that's a tale for another time) language, actually getting started is a pain. To make this process easier, I have developed a pair of templates over the years that make starting off much easier.

A key issue (and skill) in academic writing is properly referencing things, and most places have their own specific referencing style you have to follow. The University of Hull is no different, so I knew from the very beginning that I needed a solution.

I can't remember who I received it from, but someone (comment below if you remember who it was, and I'll properly credit!) gave me a .bst BibTeX referencing style file that matches the University of Hull's referencing style.

I've been using it ever since, and I have also applied a few patches to it for some edge cases I have encountered that it doesn't handle. I do plan on keeping it up to date for the foreseeable future with any changes they make to the aforementioned referencing style.

My templates include this .bst file, so they serve as a complete starting point. There's one with a full title page (e.g. for theses, dissertations, etc.), and another with just a heading that sits at the top of the document, just like a paper you might find on Semantic Scholar.
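
To give a rough idea of how the .bst file slots into a document, here's a minimal sketch. Note that the filenames hull.bst and references.bib are placeholders I've picked for illustration - check the README in the repository below for the actual names the templates use:

    % Minimal sketch of wiring up a custom BibTeX style file (.bst).
    % The filenames hull.bst / references.bib are illustrative
    % placeholders - see the repository README for the real ones.
    \documentclass{article}

    \begin{document}

    Some text with a citation \cite{somekey2024}.

    % Select the referencing style: pass the .bst filename without its
    % extension, and keep the file in the same directory as the document.
    \bibliographystyle{hull}
    \bibliography{references} % pulls entries from references.bib

    \end{document}

As with any BibTeX setup, you'd compile with pdflatex, then bibtex, then pdflatex twice more to resolve all the citations.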

Note that I do not guarantee that the output exactly matches the University of Hull's official referencing style. All I can say is that it works for me, and is my best attempt at implementing that specific style.

With that in mind, I'll leave the README of the git repository to explain the specifics of how to get started with them:

https://git.starbeamrainbowlabs.com/Demos/latex-templates

They are stored on my personal git server, but you should be able to clone them just fine. Patches are most welcome via email (check the homepage of my website)!

Happy Christmas / New Year 2023!

Heya!

I hope everyone has been having a restful winter break. This is just a quick post to wish you a happy Christmas and a great new year!

This year has been a very busy one for me - primarily with going part-time and focusing on writing my PhD thesis. I've still done a bunch of cool things this year though. Here are some of my highlights:

I'll be back in the new year with a bunch of cool things!

Looking ahead

The first event on my list is the Northern Lights Deep Learning Conference 2024, so expect to see coverage of that. I'm not sure how much time I'll have, but I'll definitely post on my Fediverse(/Mastodon) account - https://fediscience.org/@sbrl - and write things up here later if I don't get a chance at the time.

Speaking of social media, my aforementioned account on the fediverse is my primary account. I post things there that do not make it to Twitter (especially given that my automatic reposter has been broken since API access was restricted), so it's worth following me over there.

I will continue making Twitter/X → Fediscience redirect posts on Twitter, but as previously mentioned, the lack of Twitter/X API access means reposting content takes not-insignificant effort.

I'm also working with some PhD friends in my research group on SemEval 2024 Task 4. My role is identifying which word embedding system works best - and since this ties into my currently-partially-written blog post about the research side of word embeddings, I think I'm ultimately going to wait until I've crunched some numbers for this project before redrafting that post with some fancier visuals.

Looking further ahead, I'm hoping that 2024 will be the year I finally finish my PhD!

Final thoughts

As always, there's likely a lot I've been doing that I have forgotten to blog about here. Please do comment below if there's anything that you'd like to see on here that I haven't posted about / have posted about but it's confusing. I'll do my best to blog about it :D

See you in 2024!

A bauble from a friend's Christmas tree.