<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[i don't understand]]></title><description><![CDATA[i don't understand]]></description><link>https://www.idontunderstand.it/</link><image><url>https://www.idontunderstand.it/favicon.png</url><title>i don&apos;t understand</title><link>https://www.idontunderstand.it/</link></image><generator>Ghost 3.0</generator><lastBuildDate>Tue, 17 Mar 2026 14:30:00 GMT</lastBuildDate><atom:link href="https://www.idontunderstand.it/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[v0 vs Replit]]></title><description><![CDATA[<p>I wanted to compare v0 and Replit, two tools that let you create software in a browser IDE environment via an AI chatbot, and that will also deploy your code to a URL for you. This post compares my experience with each for the same task.</p>]]></description><link>https://www.idontunderstand.it/vercel-v0-vs-replit/</link><guid isPermaLink="false">69b6072e93dc9e04cdff48bc</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sun, 15 Mar 2026 11:32:13 GMT</pubDate><content:encoded><![CDATA[<p>I wanted to compare v0 and Replit, two tools that let you create software in a browser IDE environment via an AI chatbot, and that will also deploy your code to a URL for you. This post compares my experience with each for the same task. I used the desktop version of each, though both have mobile apps (v0's is iOS only, not Android).</p><h2 id="example-project">Example Project</h2><p>The prompt given to each was:</p><blockquote><em>create an app that uses the spotify api to get a song, when given a voice command to get a song of a faster or slower tempo than the current song</em></blockquote><p>The reason I used Spotify is that it provides the tempo of a song, whereas SoundCloud, for example, does not - see my other <a href="https://idontunderstand.it/spotify-vs-soundcloud-apis/">post</a> comparing the two APIs.</p><p>I also kept the prompt deliberately vague, as I wanted to see how each tool would fill in the gaps - for example, would there be an initial song, and how would it get songs?</p><p>The process, after entering the prompt into each one:</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th style="text-align:left"></th>
<th style="text-align:left">Replit</th>
<th style="text-align:left">V0</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">prompt process</td>
<td style="text-align:left">no questions throughout</td>
<td style="text-align:left">had optoins to choose from while prompting, explained that some Spotify endpoints were deprecated and provided alternative</td>
</tr>
<tr>
<td style="text-align:left">stack</td>
<td style="text-align:left">Express, React, Vite, PostgreSQL, Tailwind, shadcn\ui</td>
<td style="text-align:left">Next.js, React, Tailwind, shadcn/ui</td>
</tr>
<tr>
<td style="text-align:left">name of app</td>
<td style="text-align:left">Tempo Tuner then TempoShift</td>
<td style="text-align:left">Tempo DJ for project, Spotify song finder for code project</td>
</tr>
<tr>
<td style="text-align:left">does it work</td>
<td style="text-align:left">yes. needed to remove deprecated api call after more prompts</td>
<td style="text-align:left">UI works, but not everything</td>
</tr>
<tr>
<td style="text-align:left">model</td>
<td style="text-align:left">Anthropic Claude 3.5 Sonnet</td>
<td style="text-align:left">often Claude-based, uses Vercel API</td>
</tr>
<tr>
<td style="text-align:left">monitoring</td>
<td style="text-align:left">On the console only in an IDE tab, I think</td>
<td style="text-align:left">Logs and deployment info on vercel.com page</td>
</tr>
</tbody>
</table>
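<p>For context on what "remove deprecated API call" means here: the tempo lookup itself is a single call to Spotify's audio-features endpoint. Below is a rough, hand-written sketch of that lookup (not code generated by either tool - token handling and error handling are simplified, and this endpoint is among those affected by the Spotify API changes mentioned in my other post):</p><pre><code class="language-typescript">// Hand-written sketch, not generated by v0 or Replit.
// Assumes an OAuth access token has already been obtained.
async function getTrackTempo(trackId: string, accessToken: string) {
  // GET /v1/audio-features/{id} returns analysis data, including tempo in BPM
  const res = await fetch(`https://api.spotify.com/v1/audio-features/${trackId}`, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) {
    throw new Error(`Spotify returned ${res.status}`);
  }
  const features = await res.json();
  return features.tempo as number; // beats per minute
}</code></pre>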
<!--kg-card-end: markdown--><p>Both voice recognition parts work - they use the browser's native Web Speech API, so no extra library is needed (a minimal sketch follows the second table below). The audio is usually processed in the cloud rather than on the device, in Chrome for example.</p><p>v0 UI</p><figure class="kg-card kg-image-card"><img src="https://www.idontunderstand.it/content/images/2026/03/image.png" class="kg-image"></figure><p>Replit UI</p><figure class="kg-card kg-image-card"><img src="https://www.idontunderstand.it/content/images/2026/03/image-1.png" class="kg-image"></figure><p>From a product point of view, for the first pass of each app:</p><ul><li>v0 has a search bar which I didn't ask for but like</li><li>Replit has buttons as a fallback for the mic, which are nice</li><li>Replit also has tiles for the next song depending on what you last did - but I had to fix it via prompts, as initially it showed the same song</li></ul><h2 id="more-generally">More Generally</h2><p>Of note is that v0 is known for making UIs - and comparing the two more generally, not just for this project:</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th style="text-align:left"></th>
<th style="text-align:left">Replit</th>
<th style="text-align:left">V0</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">deployment</td>
<td style="text-align:left">IDE that deploys</td>
<td style="text-align:left">creates code but need vercel to deploy</td>
</tr>
<tr>
<td style="text-align:left">usual stack</td>
<td style="text-align:left">many languages and frameworks</td>
<td style="text-align:left">React, Next.js, Tailwind CSS, shadcn/ui, does not typically create backend code - you add Next.js routes, external db services, serverless functions etc</td>
</tr>
</tbody>
</table>
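<p>For reference, the voice command handling that both tools generated boils down to something like this minimal, hand-written Web Speech API sketch (simplified - Chrome exposes the API under a webkit prefix, hence the casts):</p><pre><code class="language-typescript">// Minimal sketch of the browser's built-in speech recognition - no extra library.
// SpeechRecognition is prefixed in Chrome, so fall back to webkitSpeechRecognition.
const Recognition =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new Recognition();
recognition.lang = "en-US";

recognition.onresult = (event: any) => {
  const command = event.results[0][0].transcript.toLowerCase();
  // e.g. "play something faster" / "play something slower"
  if (command.includes("faster")) {
    // request a higher-tempo track here
  } else if (command.includes("slower")) {
    // request a lower-tempo track here
  }
};

recognition.start(); // in Chrome, the audio is typically processed server-side</code></pre>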
<!--kg-card-end: markdown--><p>A few things to note:</p><ul><li>Vercel also created Next.js.</li><li>The use of Tailwind CSS and shadcn makes sense for AI - shadcn/ui lets you edit component code rather than use a library, and Tailwind makes styling composable text, which AI is good at generating.</li><li>I asked Replit to create a Fitbit watch face - which requires a very specific stack, the Fitbit SDK - and it simply said it would make me a web app instead, which was disappointing. For that kind of thing, tools which assist coding rather than generate entire apps, such as Cursor and GitHub Copilot, are better.</li></ul>]]></content:encoded></item><item><title><![CDATA[Spotify vs SoundCloud APIs]]></title><description><![CDATA[<p>I use both Spotify and SoundCloud (music streaming services) and both have APIs which I was interested to check out.</p><p>When I think of both the first things that come to mind are that Spotify has certain features only on its desktop version (like making folders for playlists, which can</p>]]></description><link>https://www.idontunderstand.it/spotify-vs-soundcloud-apis/</link><guid isPermaLink="false">66402ea793dc9e04cdff426c</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sun, 15 Mar 2026 01:42:28 GMT</pubDate><content:encoded><![CDATA[<p>I use both Spotify and SoundCloud (music streaming services), and both have APIs which I was interested to check out.</p><p>When I think of both, the first things that come to mind are that Spotify has certain features only on its desktop version (like making folders for playlists, which can then show on mobile), and that Spotify does not allow adding metadata like notes, which SoundCloud does.</p><p>SoundCloud started in Berlin in 2007 as a platform for musicians to easily share and collaborate on audio files. Spotify was founded in 2006 in Sweden, built to combat music piracy.</p><p>That got me thinking about the differences, which include:</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th style="text-align:left"></th>
<th style="text-align:left">SoundCloud</th>
<th style="text-align:left">Spotify</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">Who uploads the music</td>
<td style="text-align:left">Anyone can upload their own tracks, remixes, or podcasts. It’s an open platform for creators.</td>
<td style="text-align:left">Music usually comes through labels, distributors, or aggregators. It’s more curated and official.</td>
</tr>
<tr>
<td style="text-align:left">Discovery</td>
<td style="text-align:left">Known for underground, indie, and emerging artists. Many famous artists started there.</td>
<td style="text-align:left">Focuses more on mainstream, polished releases. Their discovery tools (like playlists, algorithms) lean toward popular music.</td>
</tr>
<tr>
<td style="text-align:left">Interaction</td>
<td style="text-align:left">Listeners can comment directly on tracks (even at specific time stamps), like social media for audio.</td>
<td style="text-align:left">No direct interaction with artists; it’s mainly for streaming.</td>
</tr>
<tr>
<td style="text-align:left">Access</td>
<td style="text-align:left">Free with ads, or subscription for ad-free and offline.</td>
<td style="text-align:left">Subscription-based, though Spotify has a free ad-supported version.</td>
</tr>
<tr>
<td style="text-align:left">Community vibe</td>
<td style="text-align:left">Feels like a grassroots platform for discovery and creativity.</td>
<td style="text-align:left">Feels like a polished streaming service for everyday listening.</td>
</tr>
<tr>
<td style="text-align:left">Number of Songs</td>
<td style="text-align:left">Over 100 million</td>
<td style="text-align:left">over 375 million tracks from more than 40 million artists</td>
</tr>
</tbody>
</table>
<!--kg-card-end: markdown--><p>In summary, SoundCloud is more for creators and music discovery, while Spotify is more for mainstream listening convenience.</p><h2 id="api-differences">API Differences</h2><p>Getting more into the details of the APIs:</p><!--kg-card-begin: markdown--><table>
<thead>
<tr>
<th style="text-align:left"></th>
<th style="text-align:left">SoundCloud API</th>
<th style="text-align:left">Spotify API</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">Main Purpose</td>
<td style="text-align:left">To allow uploading, sharing, and streaming of user-generated audio content.</td>
<td style="text-align:left">To let developers integrate Spotify’s catalog, playlists, and user data into apps.</td>
</tr>
<tr>
<td style="text-align:left">Acess Model</td>
<td style="text-align:left">OAuth 2.0</td>
<td style="text-align:left">OAuth 2.0 for authentication.</td>
</tr>
<tr>
<td style="text-align:left">Data Available</td>
<td style="text-align:left">Track metadata (title, artist, waveform, comments, likes). User data (profiles, followers, following, favorites). Playlist data. You can actually stream audio through the API (if licensing allows). Historically, SoundCloud allowed embedding and streaming directly from apps.</td>
<td style="text-align:left">Track, album, and artist metadata (IDs, names, popularity, audio features like tempo, danceability, etc.). Public playlists and user-created playlists (with permission). User profile data (with permission). Playback control (start/stop/skip if the user is playing Spotify on a device). You can actually stream audio through the API (if licensing allows). Historically, SoundCloud allowed embedding and streaming directly from apps.</td>
</tr>
<tr>
<td style="text-align:left">Limitations</td>
<td style="text-align:left">More basic metadata</td>
<td style="text-align:left">No direct audio streaming via the API — you can’t just pull an MP3. You only get metadata and playback controls (the actual stream plays via Spotify’s apps or SDKs). Strict on commercial use — Spotify controls how and where you can use it.</td>
</tr>
<tr>
<td style="text-align:left">Use Case</td>
<td style="text-align:left">Artists can upload tracks through the API — something Spotify doesn’t allow. Building music apps that play SoundCloud tracks, social music discovery apps, or upload tools for creators.</td>
<td style="text-align:left">Building apps that show “Now Playing,” analyze playlists, recommend tracks, or create music discovery apps.</td>
</tr>
</tbody>
</table>
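<p>To make the "OAuth 2.0" row a bit more concrete, here is a rough, hand-written sketch of Spotify's client credentials flow followed by a metadata lookup (the client ID and secret are placeholders you supply; this flow only covers public catalogue data - user data needs the authorization code flow instead):</p><pre><code class="language-typescript">// Sketch of the client credentials flow - enough for public catalogue data,
// but user playlists/profile require the authorization code flow instead.
async function getSpotifyToken(clientId: string, clientSecret: string) {
  const res = await fetch("https://accounts.spotify.com/api/token", {
    method: "POST",
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: clientId,
      client_secret: clientSecret,
    }),
  });
  const data = await res.json();
  return data.access_token as string;
}

async function getTrack(trackId: string, token: string) {
  // Returns track metadata (name, artists, popularity, etc.) - not the audio itself.
  const res = await fetch(`https://api.spotify.com/v1/tracks/${trackId}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.json();
}</code></pre>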
<!--kg-card-end: markdown--><p>Another thing to note is that Spotify <a href="https://developer.spotify.com/blog/2024-11-27-changes-to-the-web-api">locked down a lot of endpoints with data</a> - likely because the data is IP and could be scraped. It does mean less data for any hobby projects though!</p><p>In another post I'll use the Spotify API in a project!</p>]]></content:encoded></item><item><title><![CDATA[GitOps and Flux]]></title><description><![CDATA[<p>What is <a href="https://www.atlassian.com/git/tutorials/gitops">GitOps</a>:</p><p><em>GitOps is code-based infrastructure and operational procedures that rely on Git as a source control system. It’s an evolution of <a href="https://www.atlassian.com/microservices/cloud-computing/infrastructure-as-code">Infrastructure as Code (IaC)</a> and a <a href="https://www.atlassian.com/devops/what-is-devops/devops-best-practices">DevOps best practice</a> that leverages Git as the single source of truth, and control mechanism for creating, updating, and deleting</em></p>]]></description><link>https://www.idontunderstand.it/gitops-and-flux/</link><guid isPermaLink="false">66f7e1b993dc9e04cdff4494</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sun, 08 Jun 2025 09:38:22 GMT</pubDate><content:encoded><![CDATA[<p>What is <a href="https://www.atlassian.com/git/tutorials/gitops">GitOps</a>:</p><p><em>GitOps is code-based infrastructure and operational procedures that rely on Git as a source control system. It’s an evolution of <a href="https://www.atlassian.com/microservices/cloud-computing/infrastructure-as-code">Infrastructure as Code (IaC)</a> and a <a href="https://www.atlassian.com/devops/what-is-devops/devops-best-practices">DevOps best practice</a> that leverages Git as the single source of truth, and control mechanism for creating, updating, and deleting system architecture.</em></p><p>What is <a href="https://fluxcd.io/flux/">Flux</a>:</p><p><em>Flux is a tool for keeping Kubernetes clusters in sync with sources of configuration (like Git repositories), and automating updates to configuration when there is new code to deploy.</em></p><p>I have been in a position where I was familiar with Kubernetes and Helm, but not Flux. I was viewing Helm logs that were generated by Flux, and began to wonder how they were actually produced. It turns out that Flux has <a href="https://fluxcd.io/flux/releases/controllers/">controllers</a>, and there is a helm-controller that handles this.</p><p>I even asked about it in the online community, and got examples like: </p><ul><li>this <a href="https://github.com/fluxcd/helm-controller/blob/07c0a0b3158e671928ee1eeb2158f8921bb03d59/internal/action/log.go#L37">ring buffer logger</a></li><li>an example of it being <a href="https://github.com/fluxcd/helm-controller/blob/main/internal/reconcile/install.go#L72">put to use</a></li><li>persistence to <a href="https://github.com/fluxcd/helm-controller/blob/main/internal/reconcile/install.go#L160">event data</a></li></ul><p>Other than logs, things can also be captured in events for the HelmRelease object.</p><p>Overall, Flux is an interesting product, and it seems to be here to stay despite its parent company, Weaveworks, <a href="https://sdtimes.com/softwaredev/state-of-gitops-now-in-flux-as-weaveworks-shuts-down/">shutting down</a>.
</p>]]></content:encoded></item><item><title><![CDATA[Observability]]></title><description><![CDATA[<p>This post defines observability, gives an example of a user issue, and then defines OpenTelemetry, which should be used to easily pinpoint user issues in an application by looking at the application outputs. This is critical to good developer experience, which saves developers time.</p><h2 id="observing-a-system">Observing a System</h2><p>Software engineers</p>]]></description><link>https://www.idontunderstand.it/observability/</link><guid isPermaLink="false">66f7e1a893dc9e04cdff448e</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sun, 08 Jun 2025 09:09:16 GMT</pubDate><content:encoded><![CDATA[<p>This post defines observability, gives an example of a user issue, and then defines OpenTelemetry, which should be used to easily pinpoint user issues in an application by looking at the application outputs. This is critical to good developer experience, which saves developers time.</p><h2 id="observing-a-system">Observing a System</h2><p>Software engineers often spend a large amount of time trying to solve user issues that might not be easily reproducible. Sometimes, system logs don't have enough information, and one is left reading code and using a process of elimination to guess at what happened.</p><p>In theory, we should be able to pinpoint any issues in the code from the outputs of the application - that is, you should be able to tell the internal state of a system by observing it.</p><p>This is where observability comes into the picture. The <a href="https://en.wikipedia.org/wiki/Observability_(software)">Wikipedia definition</a> is:</p><p><em>In <a href="https://en.wikipedia.org/wiki/Software_engineering">software engineering</a>, more specifically in <a href="https://en.wikipedia.org/wiki/Distributed_computing">distributed computing</a>, <strong>observability</strong> is the ability to collect data about programs' execution, modules' internal states, and the communication among components.<sup><a href="https://en.wikipedia.org/wiki/Observability_(software)#cite_note-1">[1]</a><a href="https://en.wikipedia.org/wiki/Observability_(software)#cite_note-2">[2]</a></sup> To improve observability, software engineers use a wide range of <a href="https://en.wikipedia.org/wiki/Log_file">logging</a> and <a href="https://en.wikipedia.org/wiki/Tracing_(software)">tracing</a> techniques to gather telemetry information, and tools to analyze and use it. Observability is foundational to <a href="https://en.wikipedia.org/wiki/Site_reliability_engineering">site reliability engineering</a>, as it is the first step in triaging a service outage. One of the goals of observability is to minimize the amount of prior knowledge needed to debug an issue.</em></p><h2 id="example-of-an-issue">Example of an Issue</h2><p>An example of an issue that could not be reproduced is intermittent duplicate API calls being made by the browser after a user clicks a button to submit information.</p><p>The metrics, logs and traces were not detailed enough to explain why this was happening.
The API call was made idempotent, to ensure a duplicate submission only changes state once - which meant that the user was OK and the form submission only took effect once.</p><p>But let's dig into what could be observed:</p><ul><li>browser network tab - showed one call</li><li>front-end library that makes the POST - its middleware showed two POSTs</li><li>network gateway logs showed two POSTs</li><li>infrastructure (Kubernetes - service and pod) logs showed nothing unusual</li><li>application logs showed two POSTs, sometimes on two different Kubernetes pods</li></ul><p>Even with all this information, nothing could pinpoint the issue.</p><h2 id="developer-experience">Developer Experience</h2><p>The above scenario is like looking for a needle in a haystack, which is not fun, and should not be the case with well-developed apps.</p><p>You might resort to a process of elimination, and <a href="https://www.atlassian.com/team-playbook/plays/5-whys">asking why 5 times</a>, like in a retrospective.</p><p>To add to the haystack, application logs may also use non-standard terms, or even use existing standard terms for other purposes (like correlation id and span id), as many libraries are still catching up with OpenTelemetry, such as those noted <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry">here</a> and <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/app/asp-net-dependencies">here</a>.</p><p>The answer, to at least make the haystack clearer, is OpenTelemetry.</p><h2 id="opentelemetry">OpenTelemetry</h2><p><a href="https://opentelemetry.io/docs/what-is-opentelemetry/">OpenTelemetry</a> is an <strong><a href="https://opentelemetry.io/docs/concepts/observability-primer/#what-is-observability">observability</a> framework and toolkit</strong> designed to facilitate the:</p><ul><li><a href="https://opentelemetry.io/docs/concepts/instrumentation">Generation</a></li><li>Export</li><li><a href="https://opentelemetry.io/docs/concepts/components/#collector">Collection</a> of <a href="https://opentelemetry.io/docs/concepts/signals/">telemetry data</a> such as <a href="https://opentelemetry.io/docs/concepts/signals/traces/">traces</a>, <a href="https://opentelemetry.io/docs/concepts/signals/metrics/">metrics</a>, and <a href="https://opentelemetry.io/docs/concepts/signals/logs/">logs</a></li></ul><p>It is <strong>open source</strong>, as well as <strong>vendor- and tool-agnostic</strong>, meaning that it can be used with a broad variety of observability backends, including open source tools like <a href="https://www.jaegertracing.io/" rel="noopener">Jaeger</a> and <a href="https://prometheus.io/" rel="noopener">Prometheus</a>, as well as commercial offerings. OpenTelemetry is <strong>not</strong> an observability backend itself.</p>
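<p>As a rough illustration of what that instrumentation looks like in code, here is a minimal, hand-written sketch using the OpenTelemetry JavaScript API (the tracer, span and attribute names are invented for this example, and exporter/SDK setup is omitted):</p><pre><code class="language-typescript">import { trace } from "@opentelemetry/api";

// Tracer, span and attribute names below are invented for illustration.
const tracer = trace.getTracer("form-service");

async function handleSubmit(formId: string) {
  // startActiveSpan makes the span "current", so logs and downstream calls
  // made inside the callback can be correlated to the same trace.
  return tracer.startActiveSpan("submit-form", async (span) => {
    try {
      span.setAttribute("form.id", formId);
      // ... make the POST to the backend here ...
    } finally {
      span.end(); // end the span so it gets exported
    }
  });
}</code></pre>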
<p>A major goal of OpenTelemetry is to enable easy instrumentation of your applications and systems, regardless of the programming language, infrastructure, and runtime environments used.</p><p>So the lesson is - use OpenTelemetry!</p>]]></content:encoded></item><item><title><![CDATA[How to Explain "Technical Things"]]></title><description><![CDATA[<p>The battle between tech and product is never ending, and the question of why technical debt should be worked on can be hard for people who do not have an engineering background to understand.</p><p>This post lists some ways to communicate "technical things".</p><h2 id="analogies">Analogies</h2><p>Use the following analogies of the</p>]]></description><link>https://www.idontunderstand.it/how-to-explain-technical-things/</link><guid isPermaLink="false">678c4f0c93dc9e04cdff453d</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sat, 29 Mar 2025 07:29:57 GMT</pubDate><content:encoded><![CDATA[<p>The battle between tech and product is never ending, and the question of why technical debt should be worked on can be hard for people who do not have an engineering background to understand.</p><p>This post lists some ways to communicate "technical things".</p><h2 id="analogies">Analogies</h2><p>Use the following analogies for the state of things:</p><ul><li>a clean room needs regular cleaning, though if one day is missed it won't make a difference</li><li>a working car needs regular maintenance, though if one day is missed it won't make a difference</li></ul><p>Use the following analogies for habits:</p><ul><li>going for a run or going to the gym - missing one day is ok, but not every day.</li></ul><p>It's the same with technical debt. Missing it for a day won't make much impact, but if it is never addressed, the effects will build up and the negatives will surface.</p><h2 id="business-impact">Business Impact</h2><p>Communicate any of the below impacts:</p><ul><li>speed - such as every feature taking longer to build if we have outdated code</li><li>blockers - such as deprecated libraries that aren't supported anymore, can pose security risks, and won't work with other components</li><li>reliability - unresolved tech debt increases the risk of outages or bugs, which can impact customers negatively</li><li>cost - the longer we delay addressing tech debt, the more expensive it becomes to address</li></ul><h2 id="present-a-plan">Present a Plan</h2><ul><li>Break down the work into manageable tasks</li><li>From the above business impact points - show how investing time in tech debt will increase efficiency, reduce costs or enhance stability</li><li>Suggest a phased approach, such as 20% of each sprint being tech debt</li></ul><h2 id="provide-evidence">Provide Evidence</h2><ul><li>Show metrics such as customer complaints and slow delivery times</li><li>Demonstrate past successes of addressing tech debt</li></ul><p>Tie all of the above to business goals like lower cost and better user experience.</p>]]></content:encoded></item><item><title><![CDATA[Design for the Data Pattern: Entity-Attribute-Value Model]]></title><description><![CDATA[<p>The <a href="https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model">Entity-Attribute-Value</a> model is a common one in a lot of products. It allows you to add a key and a value pair to any "thing"/entity. It's often also called a custom field. 
Such as adding amenities to a hotel, or opening hours to a restaurant.</p><p>Another definition is</p>]]></description><link>https://www.idontunderstand.it/data-pattern-entity-attribute-model/</link><guid isPermaLink="false">678c4f8e93dc9e04cdff4543</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sat, 29 Mar 2025 07:10:18 GMT</pubDate><content:encoded><![CDATA[<p>The <a href="https://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model">Entity-Attribute-Value</a> model is a common one in a lot of products. It allows you to add a key and a value pair to any "thing"/entity. It's often also called a custom field - such as adding amenities to a hotel, or opening hours to a restaurant.</p><p>Another definition is that it is a user-defined data field within a software system that allows users to store and organize additional information specific to their needs or business processes.</p><p>You can find examples in many places, such as in <a href="https://support.atlassian.com/jira-cloud-administration/docs/create-a-custom-field/">JIRA</a>, travel websites (listing custom fields for hotels, for example) and a lot of customer relationship management (CRM) software.</p><p>In this post, let's discuss how you would design a solution for this at a high level.</p><h2 id="requirements">Requirements</h2><ul><li>store key-value pairs for an entity, like storing that a pool (value) is an amenity (key) for a hotel (entity)</li><li>definitions to be set up by an admin</li><li>values to be entered by users</li><li>values to be of many types, such as text, date, boolean and multi list (both single and multi select)</li></ul><h2 id="database-design">Database Design</h2><p>You could use any type of database. But let's say you are bound to a relational database.</p><p>One solution is to have a table for each of:</p><ul><li>the custom field definition</li><li>the custom field value, if it's not a multi list</li><li>multi list options</li><li>for multi list custom fields, a bridging table to list the chosen list options for a custom field</li></ul><p>For tables, design the following for your use case:</p><ul><li>have native data types for each of the data types (such as text, date, boolean), as this will prevent problems later with data integrity, and make data selects easier - such as when selecting a date range with actual dates, rather than text that has to be cast to dates for a select</li><li>why each data type was chosen, and the length of each field</li><li>uniqueness and constraints</li><li>sorting and indexes</li></ul>
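<p>As a sketch of what those four tables could look like - modelled here as TypeScript types rather than SQL, with invented names rather than ones from any particular product:</p><pre><code class="language-typescript">// Sketch of the four tables as types - names are invented for illustration.
type FieldType = "text" | "date" | "boolean" | "single_select" | "multi_select";

interface CustomFieldDefinition {
  id: number;
  entityType: string; // e.g. "hotel"
  name: string;       // e.g. "amenity" - validated against a blocklist
  type: FieldType;
}

interface CustomFieldValue {
  fieldId: number;    // FK to CustomFieldDefinition
  entityId: number;   // the hotel, restaurant, etc.
  // one native-typed column per data type; only one is populated per row
  textValue?: string;
  dateValue?: Date;
  booleanValue?: boolean;
}

interface ListOption {
  id: number;
  fieldId: number;    // FK to CustomFieldDefinition
  label: string;      // e.g. "pool"
}

// Bridging table listing the chosen options for a multi-select field on an entity
interface ChosenListOption {
  fieldId: number;
  entityId: number;
  optionId: number;   // FK to ListOption
}</code></pre>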
<h2 id="design-considerations">Design Considerations</h2><p>You should limit the number of custom fields so that users:</p><ul><li>don't exploit it (for example, if it's free as a data store) - a higher limit could have a fee to prevent that if needed</li><li>don't put a high load on infrastructure that it is not designed for</li><li>don't end up with a UI that was not designed for, say, thousands of custom fields on the page.</li></ul><p>You should also validate the name of the custom field against a blocklist of existing field names, as otherwise it will look like a non-custom field to the user in the UI (unless there is a visual difference for custom fields, but let's assume there is not).</p><p>Accidental deletion is a big loss for customers in, for example, CRMs, where they have put effort into entering the data - to protect against that you should, on delete, show a UI warning and do a soft delete (in addition to any audit tables).</p><p>For security, you should also sanitise input to protect against injection attacks.</p><p>For all these things, be clear on who is making the decision, such as whether it is product or tech.</p>]]></content:encoded></item><item><title><![CDATA[How Many Bugs is too Many?]]></title><description><![CDATA[<p>You are in a team and there are a lot of bugs. You ask, how many bugs are too many? And how do you balance the work? And how about the team morale?</p><h2 id="balance-of-work">Balance of Work</h2><p>In times like this, work competes between fixing the bugs, doing product work and</p>]]></description><link>https://www.idontunderstand.it/how-many-bugs-is-too-many/</link><guid isPermaLink="false">6780ed0493dc9e04cdff4526</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sat, 25 Jan 2025 09:08:48 GMT</pubDate><content:encoded><![CDATA[<p>You are in a team and there are a lot of bugs. You ask, how many bugs are too many? And how do you balance the work? And how about the team morale?</p><h2 id="balance-of-work">Balance of Work</h2><p>In times like this, work competes between fixing the bugs, doing product work and doing tech debt work. So how do you balance these? You balance them by "spending" an error budget.</p><p>An <a href="https://www.atlassian.com/incident-management/kpis/error-budget">error budget</a> is the maximum amount of time that a technical system can fail without contractual consequences. So you can simply use that to gauge whether you have to address bugs. But for that you need <a href="https://www.atlassian.com/incident-management/kpis/sla-vs-slo-vs-sli">SLOs and SLAs</a>.</p><p>From the above link:</p><p><em>Error budgets aren’t just a convenient way to make sure you’re meeting contractual promises. They’re also an opportunity for development teams to innovate and take risks. </em></p><p><em>The development team can ‘spend’ this error budget in any way they like. If the product is currently running flawlessly, with few or no errors, they can launch whatever they want, whenever they want. Conversely, if they have met or exceeded the error budget and are operating at or below the defined SLA, all launches are frozen until they reduce the number of errors to a level that allows the launch to proceed.</em></p><p>If you don't have SLOs and SLAs, then the error budget might be a bit more subjective. If you always have a contract of 80% feature work and 20% tech work, as is common in many companies, this might be all you have to deal with.</p><h2 id="morale">Morale</h2><p>Morale might not be affected if it's a high-performing team with a blame-free culture.
But for teams that might be newer or less mature, you might need to tread carefully before openly discussing there being too many bugs. To make the conversation more objective, try to use the above to steer it. But even with that, if you are in a team with an 80% feature work and 20% tech work split, yet there are so many bugs that you spend 80% of the time fixing bugs, then you need to have a conversation with key stakeholders and leadership to make people aware of what's happening, so that expectations can be managed.</p>]]></content:encoded></item><item><title><![CDATA[Tests - How Many Tests are Enough, and Why?]]></title><description><![CDATA[<p>What tests would you need to feel comfortable deploying code to production without any manual testing? That's the answer to how many tests are enough.</p><p>Let's break that down. A textbook answer would be:</p><ul><li>lots of unit tests in the application</li><li>fewer integration tests like Postman tests</li><li>even fewer end</li></ul>]]></description><link>https://www.idontunderstand.it/tests-how-many-are-enough/</link><guid isPermaLink="false">67557a2d93dc9e04cdff44b6</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sat, 25 Jan 2025 08:55:04 GMT</pubDate><content:encoded><![CDATA[<p>What tests would you need to feel comfortable deploying code to production without any manual testing? That's the answer to how many tests are enough.</p><p>Let's break that down. A textbook answer would be:</p><ul><li>lots of unit tests in the application</li><li>fewer integration tests like Postman tests</li><li>even fewer end-to-end tests like Cypress tests</li></ul><p>However, I don't agree with that answer.</p><p>On the unit tests - they can be a by-product of test-driven development. But once they exist, they may not be needed if integration tests are sufficient to test the inputs and outputs of the system. In fact, if large-scale refactoring is done, then a disadvantage would be that unit tests would also need to be refactored, while integration tests would not need to change, because they don't rely on the internal implementation. Inputs and outputs are what matter.</p><p>On integration tests - testing all business cases is what is relevant, rather than all possible permutations to meet 100% code coverage.</p><p>On end-to-end tests - they are the final piece to avoid manual testing and are super useful.</p><p>So to answer how many tests are enough - again, it's enough to feel confident. And that may not need to include unit tests at all.</p>]]></content:encoded></item><item><title><![CDATA[Microphone Setup]]></title><description><![CDATA[<p>"You're on mute" is really the only way to start an online meeting. The beginning of any meeting is really just audio setup for most people, which should be simple but is not. I'm making this page as a quick reference for microphone setup since I've had to</p>]]></description><link>https://www.idontunderstand.it/microphone-setup/</link><guid isPermaLink="false">6780e62793dc9e04cdff44ca</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Fri, 10 Jan 2025 09:40:12 GMT</pubDate><content:encoded><![CDATA[<p>"You're on mute" is really the only way to start an online meeting. The beginning of any meeting is really just audio setup for most people, which should be simple but is not.
I'm making this page as a quick reference for microphone setup, since I've had to do a lot of microphone configuration not just for online meetings, but also to make sure that I have good quality while using the <a href="https://idontunderstand.it/have-voice-control-can-work/">voice control described in my other post</a>.</p><p>In fact, I came across crazy stories where laptops are not configurable by the user and noise cancelling is permanently on, to the point that a user cannot transmit or record a musical instrument over their laptop microphone - the instrument is filtered out, and they cannot do anything about it, not even by changing any configuration.</p><p>I had a similar issue when using voice control, where a "pop" sound was filtered out by the microphone configuration, but I needed it to be transmitted via the microphone.</p><p>So this post is a quick reference on how to turn off noise cancelling if your laptop allows it - and since the menus here are so comprehensive, it's also a reference for how to configure microphones in general. It also mentions how to configure gain (amplification), which can fix a lot of setup issues. It is Windows-based.</p><h2 id="new-windows-menu-sound-settings">"New" Windows Menu Sound Settings</h2><p>Select the microphone as below, and ensure audio enhancements are off. Also set "input volume" to the maximum (this is software gain and should change the old Control Panel setting, but sometimes does not).</p><figure class="kg-card kg-image-card"><img src="https://www.idontunderstand.it/content/images/2025/01/image.png" class="kg-image"></figure><h2 id="old-control-panel">"Old" Control Panel</h2><p>In the old Windows menu (Control Panel &gt; Hardware and Sound &gt; Manage audio devices), double-click the specific microphone, go to Advanced, and untick noise cancelling if it's there - untick "enable audio enhancements". Also, in the "Levels" tab (this is software gain), set the gain high:</p><figure class="kg-card kg-image-card"><img src="https://www.idontunderstand.it/content/images/2025/01/image-1.png" class="kg-image"></figure><h2 id="other-software">Other Software</h2><p>A lot of laptops come with manufacturer software (Dell, for example) that manages the microphone, from various brands like MaxxAudio Pro, Alienware Command Center, etc. Make sure you've checked all the settings in those too. Note that even if such software is uninstalled, it may still have a service running, so check for that and stop the service if needed.</p><p>Also check any <a href="https://en.wikipedia.org/wiki/Microsoft_PowerToys">custom software</a>, like tools which allow you to mute and unmute via a user interface.</p><p>If you have an audio interface (which I've had in the past) and a fancy mic, you will likely have even more software and configuration available, so check all of those options too.</p>]]></content:encoded></item><item><title><![CDATA[Reducing Deployment Time]]></title><description><![CDATA[<p>Ideally deploying to prod takes 5 minutes and is just a few steps.
There is high observability, useful alerts, no manual QA, master commits deploy straight to prod and it's easy to roll back.</p><p>But at so many companies it is a whole lot longer with a whole lot more steps,</p>]]></description><link>https://www.idontunderstand.it/reducing-deployment-time/</link><guid isPermaLink="false">66c331a293dc9e04cdff437a</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Tue, 20 Aug 2024 12:56:50 GMT</pubDate><content:encoded><![CDATA[<p>Ideally deploying to prod takes 5 minutes and is just a few steps. There is high observability, useful alerts, no manual QA, master commits deploy straight to prod and it's easy to roll back.</p><p>But at so many companies it is a whole lot longer with a whole lot more steps - say 20 steps and 1 hour. The question then is how to reduce deployment time. Let's define a few things before we get to that.</p><h2 id="how-often-should-you-deploy-and-why">How Often Should You Deploy and Why?</h2><p>One of the <a href="https://dora.dev/guides/dora-metrics-four-keys/">DORA</a> (DevOps Research and Assessment) metrics is frequency of deployments.</p><p><a href="https://www.atlassian.com/devops/frameworks/devops-metrics">Various sources</a> all point to the following:</p><ul><li>High-performing teams can deploy changes on demand, and often do so many times a day. </li><li>Lower-performing teams are often limited to deploying weekly or monthly.</li></ul><h3 id="but-why">But Why</h3><p>Deploying frequently has many benefits; let's list just a few here:</p><ul><li>smaller changes mean less risk, incremental QA, and teams being agile enough to break work down</li><li>fewer merge conflicts and other dependencies that come with larger waterfall releases</li></ul><p>So the answer is - ideally, you should deploy many times a day, because it is agile and reduces risk.</p><h2 id="one-deployment-per-day-per-team-reality-or-just-theory">One Deployment per Day per Team - Reality or Just Theory?</h2><h3 id="the-industry">The Industry</h3><p>I have asked people I know whether their teams deploy once per day, and the answers I got were:</p><ul><li>people at large tech companies said yes, they deploy at least once a day per team. They have no manual QA and commits to master are deployed to prod automatically.</li><li>but most people said 1-2 deployments every 1-2 weeks.</li></ul><p>I personally have been in both kinds of teams above.</p><h2 id="how-not-to-reduce-deployment-time-kpis">How Not To Reduce Deployment Time - KPIs</h2><p>Many companies set up things like KPIs to deploy to prod every day, especially if their deployment times are long, to drive down the deployment time.</p><h3 id="good-intent-but-often-achieves-the-opposite">Good Intent - But Often Achieves the Opposite</h3><p>While this KPI has good intent, because it is a KPI it pushes people to meet the KPI today, tomorrow, the next day, and every day of their working lives - and it prioritises that over actually reducing deployment time. That is the core issue with such KPIs.
The KPI may thus actually achieve the opposite of what it was intended to achieve.</p><p>Additionally, people might not just deploy daily - they may game the system to achieve more releases, such as deploying code without value, striving to deploy only microservices (which are likely to have more deployments than a monolith), or having more people in the team so that there are more deployments per team.</p><p>So often what happens is that people do whatever it takes to meet the KPI, even if this includes wasting time. That wasted time could instead have been spent reducing deployment time. As for how to do that - it's in the next section!</p><h3 id="positives-and-negatives-of-the-kpi">Positives and Negatives of the KPI</h3><p>Let's imagine the KPI was abolished. Teams may then no longer care about deployment time. So there is a reason for the KPI - to let people get frustrated to the point that they take action themselves. This, however, can lead to employee disengagement and burnout.</p><p>So it's a lose-lose situation. If you keep the KPI, teams are wasting time. If you abolish the KPI, teams may not reduce deployment time.</p><p>Perhaps what's needed is other goals - to simply reduce deployment time.</p><h2 id="how-to-actually-reduce-deployment-time">How To Actually Reduce Deployment Time</h2><h3 id="other-company-s-journeys">Other Companies' Journeys</h3><p>Various online journeys include:</p><ul><li><a href="https://product.hubspot.com/blog/how-we-deploy-300-times-a-day">HubSpot</a> - they removed any manual steps</li><li><a href="https://monzo.com/blog/2022/05/16/how-we-deploy-to-production-over-100-times-a-day">Monzo</a> - they optimised engineering culture, tooling, and architecture</li></ul><p>None of these journeys mention a KPI.</p><p>They mention metrics - but those are app metrics, not team metrics.</p><h3 id="quick-wins">Quick Wins</h3><p>A team alone can get some quick wins, which a KPI would drive, including:</p><ul><li>automating messages to chat channels when a release happens, or a PR is raised</li><li>having feature flag toggles that take effect instantly</li><li>optimising build pipelines, for example to run steps in parallel, to cache relevant things, and to make tests run faster</li></ul><h3 id="longer-term-gains">Longer Term Gains</h3><p>A team alone can't achieve the larger cultural and process shifts needed for big reductions in deployment time - a KPI likely won't drive teams to this - unless maybe a few individuals step up and take ownership in addition to their normal work:</p><ul><li>commits in master deploying to prod automatically</li><li>no manual QA</li><li>a <a href="https://www.atlassian.com/devops/what-is-devops/devops-culture">DevOps culture</a> - closer collaboration and a shared responsibility between development and operations for the products they create and maintain</li></ul><h3 id="if-it-s-not-working">If It's Not Working</h3><p>If teams are trying to reduce deployment time but it's not working, ask:</p><ul><li>are devs not automating tests, and why</li><li>what is the complexity of the system</li><li>are there any integration tests</li><li>is the culture such that people think deployments need a "human touch"?
Such as letting people know of a big release hours/days in advance - this can still happen with daily deployments, via usual communication and collaboration.</li></ul><h2 id="closing-notes">Closing Notes</h2><p>All of the above needs to be taken in context - sometimes you don't deploy to prod for weeks, because maybe you are designing something, and sometimes you deploy to prod several times a day. Ideally though, things are at least set up for frequent deployments.</p>]]></content:encoded></item><item><title><![CDATA[The "Microservices" Buzzword]]></title><description><![CDATA[<p>"Microservices" is a bit of a buzzword - is it just a service that is micro/small? Should it always be used? The answer to both of these questions is no.</p><h2 id="history-of-microservices">History of Microservices</h2><p>The buzzword seems to have existed since the early 2000s, but Netflix seems to have pioneered</p>]]></description><link>https://www.idontunderstand.it/microservices-buzzword/</link><guid isPermaLink="false">664001bb93dc9e04cdff4249</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Fri, 24 May 2024 11:23:14 GMT</pubDate><content:encoded><![CDATA[<p>"Microservices" is a bit of a buzzword - is it just a service that is micro/small? Should it always be used? The answer to both of these questions is no.</p><h2 id="history-of-microservices">History of Microservices</h2><p>The buzzword seems to have existed since the early 2000s, but Netflix seems to have pioneered the usage of microservices around 2008. They became popular due to their high scalability.</p><h2 id="definitions">Definitions</h2><h3 id="microservice">Microservice</h3><p><a href="https://en.wikipedia.org/wiki/Microservices">Wikipedia</a> defines microservices as:</p><p>In <a href="https://en.wikipedia.org/wiki/Software_engineering">software engineering</a>, a <strong>microservice</strong> architecture is a variant of the <a href="https://en.wikipedia.org/wiki/Service-oriented_architecture">service-oriented architecture</a> structural style. It is an <a href="https://en.wikipedia.org/wiki/Architectural_pattern">architectural pattern</a> that arranges an application as a collection of <a href="https://en.wikipedia.org/wiki/Loose_coupling">loosely coupled</a>, <a href="https://en.wikipedia.org/wiki/Granularity">fine-grained</a> services, communicating through <a href="https://en.wikipedia.org/wiki/Lightweight_protocol">lightweight protocols</a>.</p><p>What's a bit odd is that there is almost no mention of <em><strong>consistency</strong></em> and messages - perhaps they wanted to keep implementations as abstract as possible. But I disagree with that. Some mention of them is required.</p><p><a href="https://youtu.be/p2GlRToY5HI?si=TsoWexvCpYjA2rRh">This talk</a> is what I would consider more accurate than the above definition.
It's an hour long, but well worth watching.</p><p>A good definition from it is that microservices:</p><ul><li>are composed of small and loosely coupled services, which can be deployed independently</li><li>are services that communicate via messages on the back end</li><li>are owned by small and well-contained teams.</li></ul><h3 id="monolith">Monolith</h3><p>To put things into perspective, the definition of a monolith is that it is built and deployed as one piece, e.g. changing one thing requires deploying the whole piece.</p><h2 id="pros-and-cons-of-microservices-vs-a-monolith">Pros and Cons of Microservices vs a Monolith</h2><p>Advantages of microservices:</p><ul><li>high availability</li><li>encourages writing loosely coupled code</li><li>encourages agile processes and teams that are experts in different parts of an application</li></ul><p>Advantages of monoliths:</p><ul><li>typically have better data throughput (not necessarily performance) - e.g. the time for an update from front end to back end</li><li>scalable, e.g. via load balancing, adding more memory or cores, virtual/physical hardware changes, and for databases, data sharding</li><li>immediate consistency</li></ul><p>Disadvantages of monoliths:</p><ul><li>one piece is one tech stack - though that could be an advantage too</li><li>not as agile a process as building a microservice</li><li>cannot have small teams focussed on one area</li></ul><h2 id="what-exactly-is-monolithic-either-the-logic-or-physical-app">What Exactly is Monolithic? Either the Logic or Physical App</h2><p>There are four combinations of the logic and the physical app, each being either monolithic or distributed:</p><ul><li>if the code is modular, and the app is physically distributed, it's a true microservice architecture, which is good</li><li>if the code is a monolith, and the app is physically a monolith, it's a ball-of-mud monolith, which is bad</li><li>if the code is modular, and the app is physically a monolith, it's a modular monolith, which is good</li><li>if the code is a monolith, and the app is physically distributed, it's a distributed monolith</li></ul><p><strong>Note on the Distributed Monolith</strong></p><ul><li>always performs worse than a monolith or true microservices</li><li>more difficult to maintain</li><li>often has one database - which becomes a bottleneck with a lot of microservices hitting it</li></ul><h2 id="when-you-should-use-microservices">When You Should Use Microservices</h2><p>Only if you have a good reason - e.g. when a monolith is genuinely bad or unscalable. Microservices are hard to do well, and the wrong reasons tend to create distributed monoliths.</p><p>Good reasons to create them:</p><ul><li>more scalability</li><li>independent deployments</li><li>isolate surface area of failure</li></ul><p>The trade-off is that microservices have high availability but weaker consistency, while monoliths have immediate consistency but lower availability.</p>]]></content:encoded></item><item><title><![CDATA[Estimating Work]]></title><description><![CDATA[<p>Estimating work is so crucial but is well recognised as being a bit of black magic. It's used to plan out work, ensure success and keep everyone on the same page.
This post aims to break down that black magic, and bring to light some of the context surrounding estimates</p>]]></description><link>https://www.idontunderstand.it/estimating-work/</link><guid isPermaLink="false">6615e14193dc9e04cdff4138</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sat, 11 May 2024 23:27:09 GMT</pubDate><content:encoded><![CDATA[<p>Estimating work is so crucial but is well recognised as being a bit of black magic. It's used to plan out work, ensure success and keep everyone on the same page. This post aims to break down that black magic, and bring to light some of the context surrounding estimates and things to consider when making them.</p><h2 id="the-wording">The Wording</h2><p>What is more accurate is to make an educated estimate, rather than just "an estimate". The change in wording makes it clear that it is an informed thing, rather than a concrete, never-changing thing.</p><h2 id="the-audience">The Audience</h2><p>You should ask yourself for whom you are making the estimate, as this will likely tell you why you are making it, so that you can include the right level of detail:</p><ul><li>yourself - to plan out your own work relative to other work</li><li>your direct manager, who might report it on to others who report it to the business</li><li>project manager, CTO, other colleagues - to report back to the business</li></ul><p>For example, you might want a breakdown of the estimate that you can sum up. But a CTO may want a one-word answer for how long something will take.</p><h2 id="the-details">The Details</h2><p>On the note of level of detail - a surprising number of people are simply not organised and don't have much detail. If you do, it is a gift. However, with such a gift, you don't always want to share a long story about the breakdown of an estimate.</p><p>If you have details, summarising them into a word or sentence is a skill in itself also.</p><h2 id="that-quantified-estimate">That Quantified Estimate</h2><p>Once you have the details of an estimate, you will have to quantify them somehow, in order to then later be able to summarise it all. Different teams estimate in different ways, such as assigning:</p><ul><li>story points</li><li>t-shirt sizes like extra small, small, medium, large, extra large</li><li>amounts of time, like days or weeks</li></ul><p>To do this, a benchmark of previous work is always useful, such as what it means for a story to be 5 story points.</p><h3 id="what-is-actually-in-an-estimate">What is Actually in an Estimate</h3><p>Usually estimates are just a measure of the "raw work" in question, and do not include overhead like:</p><ul><li>public holidays, leave</li><li>other overhead like meetings, learning</li></ul><p>But such overhead is crucial when mapping an estimate to an actual date that work might be finished by a team, so these things do need to be considered. The good news is that overhead is generally easier to predict than an estimate of work.</p><p>To add overhead, you could manually try to add some number of hours or days of known overhead to the estimate.</p><p>But some of the best people I know take an estimate, and simply double it before presenting it.
It can be hard to defend this with reasons - but a lot of the time some multiplier, like x1.5 or x2, is used to add in both risk mitigation for the actual work, plus overhead, in order to arrive at a final date by which work might be completed.</p><h3 id="large-estimates">Large Estimates</h3><p>Large estimates should definitely be broken down, as the higher the estimate, the bigger the margin of error, and also because work generally should not take longer than a sprint to complete, to avoid it dragging on.</p><h3 id="how-long-to-spend-estimating">How Long to Spend Estimating</h3><p>An important point is that there is always a level of risk in estimating, and that we should not spend too long estimating.</p><h3 id="the-final-educated-summarised-estimate">The Final Educated Summarised Estimate</h3><p>A final educated estimate might sound like a goal of delivering by the end of the month, including any public holidays, leave and other overhead.</p><h2 id="reflecting-on-estimates">Reflecting on Estimates</h2><p>Back on the concept that estimating is a black art - when asking others for examples of past estimates, you may find that there are none to share, which is a bit mysterious. Or you might be told to check the backlog - although it's often impossible to tell, with tickets that are done, how long something actually took.</p><p>Team retros, or separate meetings, are a good place to discuss reflections on work.</p><h2 id="further-reading-more-sophisticated-methods">Further Reading - More Sophisticated Methods</h2><p>Estimation is a field of its own, and one of the best estimators I know uses forecasting tools. I won't elaborate on the below - but just leave them here for more reading!</p><p>1. <a href="https://scrumage.com/blog/2015/09/agile-project-forecasting-the-monte-carlo-method/">Simple Monte Carlo forecasting</a> - works on a count of backlog items</p><p>2. Throughput Forecaster (<a href="https://github.com/FocusedObjective/FocusedObjective.Resources/raw/master/Spreadsheets/Throughput%20Forecaster.xlsx">Excel file</a>) - has metrics like backlog items and velocity</p>]]></content:encoded></item><item><title><![CDATA[The Role of "Batman": Triaging Ad-Hoc Work]]></title><description><![CDATA[<p>Working in agile teams, it's common to have to triage ad-hoc work, general questions and also bugs, while having regular sprint work too. In fact, some workplaces might have a dedicated person on a roster to do this, to "shield" the team from distractions and allow them to get</p>]]></description><link>https://www.idontunderstand.it/triaging/</link><guid isPermaLink="false">655359f593dc9e04cdff408e</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Tue, 14 Nov 2023 11:50:14 GMT</pubDate><content:encoded><![CDATA[<p>Working in agile teams, it's common to have to triage ad-hoc work, general questions and also bugs, while having regular sprint work too. In fact, some workplaces might have a dedicated person on a roster to do this, to "shield" the team from distractions and allow them to get their usual sprint work done. Some teams I've been in call this being "Batman", which is like being on call. There can also be an optional "Robin" sidekick, and as explained below there is nothing wrong with asking for more help when needed.</p><p>While it might sound simple, a lot often goes on, particularly in weighing up business priorities, timelines and clarifying what is at hand.
</p><p>Generally, most Batman bugs should go into BAU (business as usual) for review, unless there is a business reason that one is urgent.</p><p>Once an issue or query is found, to work out the priority it's always good to ask:</p><ul><li>how does this affect the business, e.g. is there a loss of money or reputation?</li><li>to tease this out in more detail, you could also ask: what is the consequence if we do not fix/investigate this?</li><li>is this an existing bug?</li><li>how long might it take to fix?</li></ul><p>It's also good to define what Batman might be. For example:</p><ul><li>As hinted at above, Batman is a way to scope ad-hoc work to one person, rather than the whole team - but there is nothing else specific to the Batman role.</li><li>It is just a person that is expected to be the point of contact for external requests during a sprint.</li><li>Batman is there to handle anything that is not in the current sprint.</li><li>Batman also completes any urgent work deemed to be part of the sprint after the triage process.</li><li>Batman should respond to questions on chat channels.</li><li>The work that is in testing is part of the current sprint, and the tickets are already assigned. It is not Batman's job to ensure everyone is updating JIRA tickets. Though when issues come up related to recent work, having up-to-date JIRA tickets can be invaluable, so Batman might ask team members to update JIRA.</li><li>It is also not Batman's job to pick up any bugs raised by testers which occur as a result of current sprint work. Developers should test everything. PRs should obviously only be submitted for code that has been tested by the developer, and that works and satisfies acceptance criteria.</li><li>Testers are the last line of defence; they are there to test more scenarios not picked up by unit, integration and functional tests, potentially taking more time to do - but the code should already satisfy the basic requirements of the story.</li><li>Batman may end up in conversation with testers due to ad-hoc questions, but if a question is related to an existing ticket, Batman is fine to pass those questions on to the relevant person working on the relevant feature.</li></ul><p>Once something is found, elaborating on the above initial questions:</p><ul><li>For things that are deemed significant enough, we can reach out to the business, like the product owner, to check if they can/should be done in the current sprint.</li><li>For the ones that we deem to be just minor bugs, they can just be in the backlog, with sufficient information - without explicitly bringing them to the Product Owner's attention.</li><li>For the ones that we may deem to be worthwhile resolving in the short-medium term, we may want to advocate for their inclusion in an upcoming sprint. The Product Owner should be looking at the backlog and prioritising on a regular basis anyway.</li><li>A lot of the discussion specific to individual bugs could take place in JIRA comments - whether it is Batman summarising what the current question/conclusion is, or whether it is tagging people to ask a question.</li></ul><p>If tasks that entered the sprint end up being dragged into multiple sprints - it is probably up to the team to decide whether to treat those as ad-hoc work for Batman to verify, triage and possibly fix, or whether to create a new ticket.</p><p>All that being said, all the above are initial guidelines.
It is fine to be agile, for example to pull something into the sprint with good reason, but then pull it out later if new information comes to hand. The key is always to have solid reasons and lines of communication.</p>]]></content:encoded></item><item><title><![CDATA[Code Review Checklist]]></title><description><![CDATA[<p>There are a lot of resources out there that are checklists for reviewing your own code, before sending it for review, such as <a href="https://mtlynch.io/code-review-love/">this </a>one.</p><p>Here is another! It is split into sections.</p><p>As a priority though, before getting code free of tech debt and perfect, one should deliver</p>]]></description><link>https://www.idontunderstand.it/code-review-checklist/</link><guid isPermaLink="false">63e8356393dc9e04cdff4027</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sun, 12 Feb 2023 00:55:21 GMT</pubDate><content:encoded><![CDATA[<p>There are a lot of resources out there that are checklists for reviewing your own code, before sending it for review, such as <a href="https://mtlynch.io/code-review-love/">this </a>one.</p><p>Here is another! It is split into sections.</p><p>As a priority though, before getting code free of tech debt and perfect, one should deliver and deploy the business value first if needed, then refactor right after as needed.</p><h2 id="straight-forward-checks">Straight Forward Checks</h2><h3 id="for-future-proofing-">For future proofing:</h3><ul><li>does this work fit in with the future tech direction?</li><li>enable new features of the language if relevant</li><li>are the best libraries for the use case being used?</li></ul><h3 id="the-basics-which-need-a-bit-of-thought-">The basics which need a bit of thought:</h3><ul><li>when passing an object in code - it is often good to pass the whole object, as the signature then won't know about its granular workings, and if it is changed in future to use other properties, the signature won't change - but on a case-by-case basis, you can also pass in only the minimal info needed</li><li>is the code well tested? Not just are there tests at all, but are the tests ones which would need to be re-written if the implementation changed? Is there a more future-proof way to test this?</li><li>test at the business level, abstract only when you need to, and avoid mocking where appropriate</li></ul><h3 id="the-basics-most-of-which-can-be-or-are-automated-">The basics, most of which can be or are automated:</h3><ul><li>anything that would be an IntelliSense suggestion</li><li>follow best practices</li><li>formatting</li><li>add documentation to methods, even architecture decision records</li></ul><h2 id="feature-specific-things-">Feature Specific Things:</h2><p>Have all of these been considered:</p><ul><li>any service should be a "technical authority for a business capability"</li><li>domain-driven design should be used if appropriate (but don't use things without reasons or because they are buzzwords!), with bounded contexts mapped out</li><li>when naming tests - name as per the business, not as per data transfer objects or random properties in the code</li><li>things that change together live together</li></ul><h2 id="simplicity-clarity-">Simplicity/Clarity:</h2><ul><li>in the pull request description, have you provided a reasonable amount of detail? A good structure is to state functional changes (actual changes to the user) and structural changes (no changes to the user, structural code changes only such as refactoring)</li><li>is the pull request too large?</li><li>would this code be confusing to a new hire?
Is there a way to make it simpler or easier to understand?</li><li>does prep work need to be done, e.g. refactoring?</li><li>is there a better way to structure this code? Is it following best practices?</li><li>is the code grouped in a way that makes sense? Is the data stored in the best way? e.g. including timezone info for datetimes, or using a structured format (such as logging fields, db fields, or JSON fields) instead of encoding it as an (unstructured) string.</li><li>is something being written from scratch when there is a library that would solve this?</li></ul><h2 id="for-when-you-are-the-reviewer">For When You Are the Reviewer</h2><p>And lastly, obviously, it's always best to check out the code in an IDE when reviewing others' pull requests.</p>]]></content:encoded></item><item><title><![CDATA[What Does "Senior" Mean in a Title?]]></title><description><![CDATA[<p>The word "senior" in a title can be a bit fluffy. Ask one person, and it differs from company to company. Ask another, and they will say it will be offered freely if the company is in a pinch.</p><p>Ask another, and maybe they will say that mastery and autonomy</p>]]></description><link>https://www.idontunderstand.it/what-does-senior-mean-in-a-title/</link><guid isPermaLink="false">63e8327793dc9e04cdff3fdc</guid><dc:creator><![CDATA[Zoja Savkovic]]></dc:creator><pubDate>Sun, 12 Feb 2023 00:39:22 GMT</pubDate><content:encoded><![CDATA[<p>The word "senior" in a title can be a bit fluffy. Ask one person, and it differs from company to company. Ask another, and they will say it will be offered freely if the company is in a pinch.</p><p>Ask another, and maybe they will say that mastery and autonomy are the key.</p><p>Look around and maybe you will see people leaving companies in the pursuit of titles, and even sometimes going back to their initial company with a title change.</p><p>But what does all that mean, and how can you progress yourself?</p><p>From various sources, the below seem to be reasonable expectations for a senior software engineer.</p><p>It is worth noting that one does not need to meet all of these to be senior - they're all different skills that make the difference between senior, mid and junior. A lot of these skills come with experience, but not everyone makes the effort to learn them, so years of experience aren't always a good measure.</p><p>The core needs seem to be:</p><ul><li><strong>technical skills</strong> - can you pick up almost all tasks and get them done with minimal support? Note that this is support and not discussion - you might need to work with other seniors or team members to decide on the approach, but you understand the issue, bring some ideas of your own, and probably have specific questions rather than vague “how to…” type ones.</li><li><strong>project skills</strong> - can you be given something broad and not deeply defined and get it shipped? This would involve everything from discussing with product people to nail down requirements, to breaking down tasks, to being the technical lead, etc. This area varies a lot from company to company. Some places have business analysts or scrum masters, or their team lead gets very involved with this part. So the exact requirements here depend on that.
But there would be at least some work here for all seniors.</li><li><strong>people skills</strong> - mentoring junior members, influencing leadership to make good technical decisions, working with team members, and probably a bit across teams.</li><li><strong>being practical and customer-focussed</strong> - making sure the work you do contributes to your company’s bottom line, balanced with the technical and people skills to make sure the company isn’t building up too much technical debt, and can sustain itself long term.</li></ul><p>In other roles, senior might also involve mentoring others - there's an endless list of things that can be added.</p><p>A practical, although subjective, way to gauge seniority can be to simply ask the people around you if you are senior.</p><p>You can also read up online - for example this excellent <a href="https://noidea.dog/glue">talk</a>, which I have seen in person.</p><p>Despite people's best efforts to define all of this, the process of progression to senior is still quite a black box at most companies. It's not a secret to talk about the process, but it's certainly mysterious and secret-like.</p><p>In an industry such as software, the stereotype of a rock star ninja developer probably hinders defining what a good developer actually is. There are stories of places where people really had to "prove themselves" to get to the next level of seniority. That sounds like a recipe for burnout.</p><p>Moreover, the people making decisions may not be experienced enough. Red flags for this include:</p><ul><li>being told that you should not care about titles, as they should not be the main pursuit in your career.</li><li>being told that others haven't been promoted, so the process is unclear.</li><li>being told that apparently many people do not meet their title once they achieve it - which sounds like an error on the part of management. This is all beginning to sound like an industry-wide problem where two levels of engineer are simply not enough. Some places have more granular structures which seem more reasonable.</li><li>if a role's standards are not met, then apparently company "performance improvement plans" can have legal implications, like an employee having to meet a certain level of performance by a set date or risk being let go.</li></ul><p>On a more positive note, to define "senior", you can reach out to the people team for guidance, but it should be your manager's job to know this.</p><p>In the absence of people knowing things, you can reach out to mentors, peers and contacts for their opinion.</p><p>If you aren't getting clear answers and are past the usual timeframe, then by process of elimination there are two options:</p><ul><li>something is extremely wrong with your performance, and you should have been told if this is the case</li><li>or the standards are unknown and/or possibly too high. If the standards are unknown, they should be made clear to you.</li></ul><p>Or possibly a third option - that you are on your way to a title change, since generally nobody will directly tell you that you are getting a promotion, although at some companies this does happen.</p><p>From all of this analysis and reflection, all that can truly be said is that everything is subjective.</p><p>But that is a bit bleak.
There is hope, if you have a decent team, to define various things - for example via career matrices, or breakdowns like the one in this <a href="https://github.com/OctopusDeploy/People/blob/main/Software-Engineering/L3-Senior-Software-Engineer.md">link</a>, which are structured and detailed. Details should always be made clear, to make the process of acquiring experience less subjective and more objective.</p>]]></content:encoded></item></channel></rss>