Welcome to the home of Shifoo (previously That Computer Scientist). My name is @bobby, and this is my personal website. I aim to build a retro-looking personal website where I can share my thoughts, ideas, and experiences through articles, and showcase some cool nostalgic features and tools.
Please note that I am continuously working on this site, and it is still under construction. So, not all features are available yet, and some features may not work as intended.
There's also some fun stuff in the sidebar that you can play around with, and I will be adding more in the not-so-distant future.
Also, to participate in the various sections of the site, you will need to register for an account. I hope you enjoy your stay here.
Recently, I was trying to build a "multi-screen" or "multi-view" application in Go using the Bubble Tea library. For those who don't know, Bubble Tea is a framework for building terminal-based UIs/applications, based on The Elm Architecture. Not only that, Bubble Tea also ships with a number of "sister" libraries which you can use to further enhance your experience of building CLI or TUI applications. Bubbles and Lip Gloss are two such libraries from the same ecosystem, both of which I will also be using throughout this article.
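Just to set the stage, here is roughly what a Bubble Tea program looks like in Go. This is a minimal, hypothetical sketch (the model type, its field, and the key bindings are my own, and it assumes a recent Bubble Tea release where Program.Run exists; older releases use Start instead), but it already shows the three pieces The Elm Architecture revolves around: a model, an update function, and a view function.

```go
package main

import (
	"fmt"
	"os"

	tea "github.com/charmbracelet/bubbletea"
)

// model holds the entire state of the program; in this toy case, just a counter.
type model struct {
	count int
}

// Init runs once at startup and may kick off an initial command.
func (m model) Init() tea.Cmd {
	return nil
}

// Update reacts to incoming messages (key presses, ticks, ...) and
// returns the next state plus an optional command to run.
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tea.KeyMsg:
		switch msg.String() {
		case "+":
			m.count++
		case "q", "ctrl+c":
			return m, tea.Quit
		}
	}
	return m, nil
}

// View renders the current state as a string.
func (m model) View() string {
	return fmt.Sprintf("count: %d  (press + to increment, q to quit)\n", m.count)
}

func main() {
	if _, err := tea.NewProgram(model{}).Run(); err != nil {
		fmt.Println("error:", err)
		os.Exit(1)
	}
}
```

Every Bubble Tea program, including the multi-screen one we are about to build, is some variation of this loop.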
Now, back to the basic problem. All I wanted to do was switch between two (or potentially more) screens. So, let's start by building two example screens — one with a spinner and the other with a simple text message. But before we do that, let's first see how The Elm Architecture works. Right here, you can see a picture of The Elm Architecture, which I stole from the official Elm guide. The Elm Architecture is based on three main components:
One of the most important features of the Parsing Expression Grammar formalism is that it is packrat-parsable, meaning it can be parsed in linear time using a technique called memoization. This technique is also known as tabling in the logic programming community. The basic idea of a Parsing Expression Grammar, or PEG, is that you have a DSL, a domain-specific language, which looks almost exactly like a BNF, except that it is a program: the grammar itself is the parser. Unlike a context-free grammar, a PEG is never ambiguous, because its choice operator is ordered, but the parser may have to backtrack when an alternative fails partway through. Packrat parsers make this backtracking efficient with caching: the parser remembers the results of all the sub-parsers it has already run at each input position, and if it encounters the same sub-parser at the same position again, it simply returns the result from the previous run. This behaviour is what makes packrat parsing such a powerful technique.
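To make the memoization idea concrete, here is a minimal, hand-rolled sketch in Go. The toy grammar (expr <- sum '=' sum / sum, sum <- number ('+' number)*), the rule names, and the memoize helper are all made up for illustration and are not from any particular PEG library; the only point is the (rule, position) -> result cache that lets a failed alternative be retried for free.

```go
package main

import "fmt"

// result of a parse attempt: where the match ended and whether it succeeded.
type result struct {
	end int
	ok  bool
}

// key identifies a (rule, position) pair in the memo table.
type key struct {
	rule string
	pos  int
}

type parser struct {
	input string
	memo  map[key]result
}

// memoize is the packrat part: the first time a rule is tried at a given
// position its result is cached, and every later attempt at that position
// is answered from the cache instead of re-running the rule.
func (p *parser) memoize(rule string, pos int, fn func(int) (int, bool)) (int, bool) {
	k := key{rule, pos}
	if r, hit := p.memo[k]; hit {
		return r.end, r.ok
	}
	end, ok := fn(pos)
	p.memo[k] = result{end, ok}
	return end, ok
}

// number <- [0-9]+
func (p *parser) number(pos int) (int, bool) {
	return p.memoize("number", pos, func(i int) (int, bool) {
		start := i
		for i < len(p.input) && p.input[i] >= '0' && p.input[i] <= '9' {
			i++
		}
		return i, i > start
	})
}

// sum <- number ('+' number)*
func (p *parser) sum(pos int) (int, bool) {
	return p.memoize("sum", pos, func(i int) (int, bool) {
		i, ok := p.number(i)
		if !ok {
			return pos, false
		}
		for i < len(p.input) && p.input[i] == '+' {
			j, ok := p.number(i + 1)
			if !ok {
				break
			}
			i = j
		}
		return i, true
	})
}

// expr <- sum '=' sum / sum
// The ordered choice is where backtracking happens: if the first
// alternative fails, "sum" is retried at the same position and the
// memo table answers it without re-parsing.
func (p *parser) expr(pos int) (int, bool) {
	return p.memoize("expr", pos, func(i int) (int, bool) {
		if end, ok := p.sum(i); ok && end < len(p.input) && p.input[end] == '=' {
			if end2, ok2 := p.sum(end + 1); ok2 {
				return end2, true
			}
		}
		return p.sum(i) // second alternative: served from the cache
	})
}

func main() {
	p := &parser{input: "1+22+333", memo: map[key]result{}}
	end, ok := p.expr(0)
	fmt.Println(ok, end == len(p.input)) // true true
}
```

When the first alternative of expr fails at the '=' check, the second alternative retries sum at the same position and gets the cached answer, which is exactly the trick that keeps packrat parsing linear.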
When you design a language, you typically want to formalize the syntax with a context-free grammar, and then you feed it through a parser generator which produces a table-driven, bottom-up parser. You may then have to hack on the grammar until you get it right, because for these parsers to be efficient they need to be able to look ahead just one symbol to know which choice to make, since they typically don't support backtracking. Of course, there are versions which support unlimited backtracking, but that can make your parser very inefficient. So, to end up with a very efficient parser, you usually try to massage your language into a nice grammar and then use the generated parser to do what you like.
Remember the days when everyone and their pet iguana was raving about Arch Linux? You couldn't escape the
ever-so-subtle "I use Arch BTW" remarks in every Linux forum. Well, move over, Arch, because NixOS is here to steal
your thunder! Nowadays, it seems that you can't browse YouTube or read a blog without stumbling upon someone
extolling the virtues of NixOS and how it is the epitome of computing perfection. But hey, who needs critical
analysis when we can jump on the hype train and declare NixOS as the new Arch? Because that's exactly what's going
on. NixOS has now become the self-proclaimed prodigy that's poised to dethrone Arch Linux as the holy grail of Linux
distributions. The call has gone out, my friends! It's time for you – the seasoned Linux enthusiast – to dust off your keyboard warrior cape and embark on a new crusade. So, grab your Tux plushie (or your pitchforks, if you belong to the world of devils) and let's set off on an adventure through the enigmatic world of NixOS (and let the memes
commence)!
Guess who's back? Back again?... for the rest, go listen to the fucking song! I am not here to sing songs for you. Anyroad, what's up? Actually, no one cares... so, let's move on. Remember back in the day, when you visited your favourite forum and it would say something like "$N$ users online: $n$ members, $x$ guests" (where, of course, $n, x \in \mathbb{N}$ and $N = n + x$)? This is called user presence. It's a simple way to show your users that they are not alone on your website. It's also a great way to show off your mad skills to your friends. So, let's get started!
Before we start, I would like to discuss your options to track user presence on your website. There are multiple ways to do this:
WebSockets: This is the most modern and probably the most efficient way to do this. WebSockets give you a persistent, bidirectional communication channel between your server and the client. Broadly speaking, you would set up a WebSocket server that listens for incoming connections and handles WebSocket events; when a client establishes a WebSocket connection, you register their presence by storing relevant data (e.g., user ID, session ID) on the server. Then you can implement mechanisms to track user activity, such as sending heartbeats or receiving client-initiated events. A rough sketch of this approach follows below.
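As an illustration of that flow, here is a minimal sketch in Go using the gorilla/websocket package (the endpoint path, the in-memory connection set, and the plain-text count message are my own assumptions for this example, not a prescription): each connection is added to a set when it is upgraded, removed when its read loop ends, and the current count is broadcast to everyone after each change.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	// Allow all origins for this sketch; tighten this in real code.
	CheckOrigin: func(r *http.Request) bool { return true },
}

// presence tracks the set of live connections behind a mutex.
type presence struct {
	mu    sync.Mutex
	conns map[*websocket.Conn]bool
}

func (p *presence) add(c *websocket.Conn) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.conns[c] = true
}

func (p *presence) remove(c *websocket.Conn) {
	p.mu.Lock()
	defer p.mu.Unlock()
	delete(p.conns, c)
}

// broadcast sends the current online count to every connected client.
func (p *presence) broadcast() {
	p.mu.Lock()
	defer p.mu.Unlock()
	msg := []byte(fmt.Sprintf("%d users online", len(p.conns)))
	for c := range p.conns {
		// Errors are ignored here; a dead connection will be removed
		// when its read loop returns.
		c.WriteMessage(websocket.TextMessage, msg)
	}
}

func main() {
	p := &presence{conns: map[*websocket.Conn]bool{}}

	http.HandleFunc("/presence", func(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			return
		}
		p.add(conn)
		p.broadcast()
		defer func() {
			p.remove(conn)
			conn.Close()
			p.broadcast()
		}()
		// The read loop doubles as a liveness check: it returns when
		// the client disconnects or the connection breaks.
		for {
			if _, _, err := conn.ReadMessage(); err != nil {
				return
			}
		}
	})

	http.ListenAndServe(":8080", nil)
}
```

A real setup would also attach a user or session ID to each connection and add periodic heartbeats, as described above, so that half-dead connections don't inflate the count.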
Are you back or are you new here? If you're back, why'd you even come back? If you're new, you are in for hell.
People who are back, you know what's coming. People who are new, you don't know what's coming. But you will. You
will. Last time, we discussed "Why Comparison-based Sorting Algorithms have $\Omega(n \log n)$ Lower Bound" and this is a follow-up to that.
Also, if you haven't read that, go read the previous article first, then come back here and read this. Alright,
ready? Let's go.
Today, we will discuss average-case lower bounds for comparison-based sorting algorithms. Now, I don't expect your
little brain to remember everything you were spoon-fed last time, so I'll give you a quick recap. As expected, our
focus in this article, once again, is comparison-based sorting algorithms. In our last article, we were able to
define a comparison-based sorting algorithm.
Also, we were able to prove that any comparison-based sorting algorithm must take $\Omega(n \log n)$ time in the worst case, and we stated a theorem that captures this result. Do you remember what it was? Of course, you don't. Here's the theorem: