Los Techies AutoMapper and MediatR Roadmaps https://www.jimmybogard.com/automapper-and-mediatr-roadmaps/ Jimmy Bogard Tue, 08 Jul 2025 15:15:32 +0000 <p>One of my main goals of commercializing AutoMapper and MediatR was being able to finally invest time in these projects, where basically all new work stopped when I lost corporate sponsorship. I wanted to take some time to share where I&apos;d like to take these projects now that I have that sponsorship back.</p><h3 id="tracking-official-net-support">Tracking official .NET support</h3><p>Firstly, the latest releases bring back <code>netstandard2.0</code> support for both AutoMapper and MediatR, both of which dropped it years ago. MediatR was actually still on <code>net6.0</code> prior to this release, which had already been out of support for months.</p><p>It wasn&apos;t exactly easy, especially because of how much <code>net8.0</code> and <code>net9.0</code> have diverged from <code>netstandard2.0</code>, not just in terms of APIs but C# language features. But having been part of several ASP.NET 4.x to ASP.NET Core migrations, having <code>netstandard2.0</code> support makes this transition quite a bit easier. In the past we&apos;d have to conditionally reference packages because there was no longer a common package version between, say, .NET 8 and .NET 4.8. That&apos;s something I wish I&apos;d had back then, and now I do.</p><h3 id="automapper-roadmap">AutoMapper Roadmap</h3><p>One of the biggest complaints I hear about AutoMapper is that it&apos;s hard to debug - you&apos;re trading compile-time errors for runtime exceptions. We spent a LOT of time baking better exception handling into the generated expression trees (resulting in worse performance, but better diagnostics), but that isn&apos;t always enough.</p><p>The answer here is <strong>source generators</strong>, but I&apos;m not interested in merely copying other libraries&apos; approaches. What I want to target is source generators that:</p><ul><li>Plug in to AutoMapper&apos;s rich extensibility model</li><li>Stay true to AutoMapper&apos;s <a href="https://www.jimmybogard.com/automappers-design-philosophy/" rel="noreferrer">design philosophy</a></li><li>Support IQueryables (my favorite feature)</li><li>Track the features of AutoMapper&apos;s in-memory mapping</li><li>Support mapping validation (critical for any mapping tool)</li></ul><p>Debuggability is my main focus here, although obviously performance would be a secondary win. Source generators have come a LONG way since I first looked at them when they were first released, so I&apos;m excited to extend AutoMapper&apos;s functionality in this area.</p><p>This one is pretty big, so that&apos;s going to be my focus initially.</p><h3 id="mediatr-roadmap">MediatR Roadmap</h3><p>Some folks have asked about, or even pointed to, other libraries that do source generation of basically a copy of MediatR&apos;s API. I am looking at that, but there&apos;s been quite a few things on MediatR&apos;s backlog that I want to look at first.
I find source generation in mediators a bit less interesting for real-world projects, outside of philosophical debates.</p><p>MediatR is commonly used in concert with <a href="https://www.jimmybogard.com/vertical-slice-architecture/" rel="noreferrer">Vertical Slice Architecture</a>, and a number of its features came out of using it in these scenarios (like behaviors). Today, a lot of features are tied to the feature set of the stock Microsoft DI container. Unfortunately, features are only really added to that container if the ASP.NET Core team needs them. Even my PR to support generic constraints took something like 5 years to merge in.</p><p>Moving away from relying on those DI features would mean I could do much more interesting things in the &quot;application use case pipeline&quot; that aren&apos;t possible with C#/DI alone (see the sketch after this list), like:</p><ul><li><strong>Applying behaviors based on customized policies</strong></li><li><strong>Baking in support for result patterns</strong></li><li><strong>Direct support for application use cases</strong><ul><li>Blazor (sending a request from the client to a handler on the server)</li><li>Minimal APIs (scaffolding to separate API logic from application logic)</li><li>Domain events via notifications and EF/other ORMs</li></ul></li></ul>
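<p>To make the DI limitation concrete: today, scoping a behavior to a subset of requests means leaning on generic constraints, which the stock container only recently learned to honor (that was the PR mentioned above). A minimal sketch of that pattern - the <code>ITransactional</code> marker and <code>TransactionBehavior</code> names are hypothetical, not part of MediatR:</p><pre><code class="language-csharp">// Hypothetical marker for requests that should run in a transaction
public interface ITransactional { }

// Open generic behavior that only applies to transactional requests
public class TransactionBehavior&lt;TRequest, TResponse&gt; : IPipelineBehavior&lt;TRequest, TResponse&gt;
    where TRequest : IRequest&lt;TResponse&gt;, ITransactional
{
    public async Task&lt;TResponse&gt; Handle(
        TRequest request,
        RequestHandlerDelegate&lt;TResponse&gt; next,
        CancellationToken cancellationToken)
    {
        // begin a transaction here, commit/roll back around next()
        return await next();
    }
}

// The container must understand the generic constraint to close this correctly
services.AddTransient(typeof(IPipelineBehavior&lt;,&gt;), typeof(TransactionBehavior&lt;,&gt;));</code></pre><p>Policy-based behaviors would express that kind of filtering directly, without contorting the type system to get there.</p>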
<p>The idea of behaviors came from reviewing many production systems using MediatR and folding that experience into first-class features. I am going to continue on this track.</p><p>What else are you interested in?</p> AutoMapper and MediatR Commercial Editions Launch Today https://www.jimmybogard.com/automapper-and-mediatr-commercial-editions-launch-today/ Jimmy Bogard Wed, 02 Jul 2025 15:00:12 +0000 <p>Today I&apos;m excited to announce the official launch and release of the commercial editions of AutoMapper and MediatR. Both of these libraries have moved under their new corporate owner (me), <a href="https://luckypennysoftware.com/?ref=jimmybogard.com" rel="noreferrer">Lucky Penny Software</a>. I formed this company to house these projects separately from my consulting company, but it&apos;s just me there; I&apos;m the sole corporate overlord.</p><p>The GitHub repositories have transferred to the new GitHub organization (along with their ownership) here:</p><ul><li><a href="https://github.com/luckypennysoftware/automapper?ref=jimmybogard.com" rel="noreferrer">LuckyPennySoftware/AutoMapper</a></li><li><a href="https://github.com/luckypennysoftware/mediatr?ref=jimmybogard.com" rel="noreferrer">LuckyPennySoftware/MediatR</a></li></ul><p>With these, I&apos;ve launched new home pages for each library:</p><ul><li><a href="https://automapper.io/?ref=jimmybogard.com" rel="noreferrer">https://automapper.io</a></li><li><a href="https://mediatr.io/?ref=jimmybogard.com" rel="noreferrer">https://mediatr.io</a></li></ul><p>As well as a storefront site to purchase and manage licenses at:</p><ul><li><a href="https://luckypennysoftware.com/?ref=jimmybogard.com" rel="noreferrer">https://luckypennysoftware.com</a></li></ul><p>It&apos;s quite a bit to dig into, so let&apos;s go over the details!</p><h3 id="whats-the-new-license">What&apos;s the new license?</h3><p>As <a href="https://www.jimmybogard.com/automapper-and-mediatr-licensing-update/" rel="noreferrer">discussed before</a>, I wanted to release these libraries under a <a href="https://github.com/LuckyPennySoftware/AutoMapper/blob/master/LICENSE.md?ref=jimmybogard.com" rel="noreferrer">dual-license model</a>:</p><ul><li><a href="https://opensource.org/license/rpl-1-5/?ref=jimmybogard.com" rel="noreferrer">Reciprocal Public License 1.5 (RPL1.5)</a></li><li><a href="https://luckypennysoftware.com/license?ref=jimmybogard.com" rel="noreferrer">Lucky Penny Software Commercial License</a></li></ul><p>It&apos;s a common dual-license model that many other OSS companies (MongoDB etc.) have chosen and had success with.</p><p>Under the commercial license, I&apos;ve created a <strong>tier-based licensing model</strong> based on <strong>team size</strong>. There are <strong>no individual per-seat licenses</strong>, only licensing based on the number of developers.</p><h3 id="how-much-will-it-cost">How much will it cost?</h3><p>With a tier-based pricing approach, I wanted a pricing model that scales with team size and allows for company growth without a lot of hassle. There are 3 paid tiers:</p><ul><li>Standard - <strong>1-10 developers</strong></li><li>Professional - <strong>11-50 developers</strong></li><li>Enterprise - <strong>Unlimited developers</strong></li></ul><p>Pricing is a <strong>subscription model</strong>, with both monthly and annual subscriptions (with a discount for annual subscriptions), as well as an option to <strong>bundle both libraries</strong> at a discount.
You can find the details here (with all options), priced in your local currency or in USD:</p><ul><li><a href="https://automapper.io/?ref=jimmybogard.com#pricing" rel="noreferrer">AutoMapper pricing</a></li><li><a href="https://mediatr.io/?ref=jimmybogard.com#pricing" rel="noreferrer">MediatR pricing</a></li></ul><p>You can also find the details of what subscription benefits you&apos;ll get at the links above, including:</p><ul><li>Private Discord channels</li><li>Priority support</li><li>Early access to new releases</li><li>Support for all currently supported versions of .NET Framework 4.x and .NET (<code>netstandard2.0</code>, <code>net8.0</code>, <code>net9.0</code>)</li><li>And more (as I build it)</li></ul><p>All subscription payments are managed through <a href="https://www.paddle.com/?ref=jimmybogard.com" rel="noreferrer">Paddle</a>, which supports...many different countries, currencies, and payment providers.</p><h3 id="do-you-have-free-licenses-for-insert-situation-here">Do you have free licenses for &lt;insert situation here&gt;?</h3><p>Yes! Besides the RPL license, I&apos;m also including a <strong>Community edition</strong> under the Commercial license that is <strong>free</strong> for:</p><ul><li>Companies and individuals <strong>under $5,000,000</strong> in gross annual revenue</li><li>Non-profits <strong>under $5,000,000</strong> in annual total budget (expenditure)</li><li>Educational/classroom use</li><li>Non-production environments</li></ul><p>You&apos;re still required to register for a license key, but this is only for auditing purposes.</p><h3 id="how-do-i-get-the-commercial-versions">How do I get the commercial versions?</h3><p>To make everyone&apos;s lives easier, these new major versions of AutoMapper and MediatR on NuGet are released under the new dual license agreement:</p><ul><li><a href="https://www.nuget.org/packages/AutoMapper/15.0.0?ref=jimmybogard.com" rel="noreferrer">AutoMapper v15.0</a></li><li><a href="https://www.nuget.org/packages/MediatR/13.0.0?ref=jimmybogard.com" rel="noreferrer">MediatR v13.0</a></li></ul><p>When you install these versions, you&apos;ll now be prompted for license acceptance. Once you obtain a license key, you can set it as:</p><pre><code class="language-csharp">services.AddAutoMapper(cfg =&gt; // or AddMediatR
{
    cfg.LicenseKey = &quot;&lt;License key here&gt;&quot;;
});</code></pre><p>I don&apos;t restrict usage of these products with a missing/invalid/expired license key, but you&apos;ll see some messages in your logs prompting you to supply a valid key.</p><h3 id="what-about-the-existing-versions">What about the existing versions?</h3><p>I&apos;ve created archived versions of the final releases of these two libraries:</p><ul><li><a href="https://github.com/automapper/automapper.archive?ref=jimmybogard.com" rel="noreferrer">AutoMapper/AutoMapper.Archive</a></li><li><a href="https://github.com/jbogard/mediatr.archive?ref=jimmybogard.com" rel="noreferrer">jbogard/MediatR.Archive</a></li></ul><p>Per those existing license agreements, you&apos;re free to fork, download, print out and read by the fireplace. Those archives will live on for anyone to use as they like.</p><p>If you&apos;re an existing user, you don&apos;t need to do anything. The existing NuGet packages (prior to the major versions listed above) are bound by the license agreements at the time of their release and will also live on.</p><h3 id="why-lucky-penny">Why Lucky Penny?</h3><p>Because she was my first dog!
Although she&apos;s no longer with us, I loved her spunk and her spirit and wanted to honor her memory with my company name (and logo). Here she is judging, always judging:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.jimmybogard.com/content/images/2025/07/DSC00167-copy.jpeg" class="kg-image" alt loading="lazy" width="1015" height="968" srcset="https://www.jimmybogard.com/content/images/size/w600/2025/07/DSC00167-copy.jpeg 600w, https://www.jimmybogard.com/content/images/size/w1000/2025/07/DSC00167-copy.jpeg 1000w, https://www.jimmybogard.com/content/images/2025/07/DSC00167-copy.jpeg 1015w" sizes="(min-width: 720px) 720px"><figcaption><span style="white-space: pre-wrap;">Penny the dog</span></figcaption></figure><p>I named her Penny because 1) she was found by the side of a busy highway miles from anywhere (lucky for both of us) and 2) her copper color. So, Lucky Penny Software!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://www.jimmybogard.com/content/images/2025/07/logo_wide.png" class="kg-image" alt loading="lazy" width="270" height="100"><figcaption><span style="white-space: pre-wrap;">Lucky Penny Software logo</span></figcaption></figure><p>It&apos;s been a long journey to get here, but I&apos;m excited about what the future holds for these libraries that have amassed more than 1.1 billion downloads. Thanks everyone for your patience and support as I worked to launch!</p> AutoMapper and MediatR Licensing Update https://www.jimmybogard.com/automapper-and-mediatr-licensing-update/ Jimmy Bogard Wed, 16 Apr 2025 05:15:28 +0000 <p>In my <a href="https://www.jimmybogard.com/automapper-and-mediatr-going-commercial/" rel="noreferrer">last post</a>, I shared the news that I&apos;ve decided to take a commercialization route for AutoMapper and MediatR to ensure their long-term success. While that post was heavy on the motivation, it was intentionally light on the details. I did share that I wanted to be transparent on that process, and this post is part of that transparency.</p><p>There is a TON of information out there on possible models for sustainable open source, such as:</p><ul><li>Consulting services</li><li>Open core</li><li>Hosted services</li><li>Dual license</li><li>a dozen others</li></ul><p>And of course there&apos;s my previous situation: &quot;be fortunate enough to work at a place that values and directly sponsors your work.&quot; This is the easiest place to be, but for projects that reach some threshold of users/downloads/complexity, maintainers must rely on sponsorship in some form or fashion.
And when that sponsorship goes away for whatever reason, well, here we are.</p><p>Of the many options available, the most viable is to <strong>move AutoMapper and MediatR to a </strong><a href="https://en.wikipedia.org/wiki/Multi-licensing?ref=jimmybogard.com" rel="noreferrer"><strong>dual license model</strong></a>. This looks to be the best choice after carefully examining the options and consulting with many other OSS maintainers who have already made this journey.</p><h3 id="dual-licensing-model">Dual Licensing Model</h3><p>When I first started thinking about how I might go about this, I asked myself, &quot;who bears the most responsibility in ensuring the sustainability of the OSS projects on which they depend?&quot; which is a long-winded way of saying &quot;who should pay?&quot; But another way of thinking about this is &quot;who should NOT pay?&quot; Looking at how others do this as well as how I want to approach it, <strong>I want to make these libraries free for</strong>:</p><ul><li>Developers using them in an OSS setting</li><li>Individuals/students/hobbyists (using AutoMapper for fun, not profit)</li><li>Non-profits/charities (maybe not for fun but also not for profit)</li><li>Startups or small companies (below some revenue/funding threshold)</li><li>Non-commercial settings (this one I&apos;m not sure is absolutely necessary given the other categories)</li><li>Non-production environments (instead of any trial period etc.)</li></ul><p>I don&apos;t know if this exact verbiage will be the end result, but this is my overall goal.</p><p>Then there&apos;s who I&apos;m targeting for paid licenses: <strong>for-profit businesses using these libraries for commercial activities</strong>. Looking at my clients over the years who&apos;ve used my libraries, it&apos;s a mix of these free/commercial categories.</p><p>In terms of a model for commercial licensing, I want to ensure that paid licenses add value beyond &quot;I can download the license.&quot; This is the more fun part of this exercise for me, where I can try the things I never really could before without a more direct form of sponsorship/funding. I have a lot of ideas here, but nothing ready to share yet. If <em>you</em> have an idea of &quot;if my company paid for a license, what else would I want to have included?&quot; I would love to hear about it!</p><p>I am looking at a <strong>tiered license model</strong> but <strong>no per-seat licenses</strong>. I don&apos;t want to charge individual developers anything&#x2014;that seems like a pain for everyone involved and I&apos;m trying to keep things simple. &quot;A new developer gets onboarded and now we need a new license&quot; is too much for me to deal with and goes against the spirit of these libraries&#x2014;the benefit is to the entire team, regardless of the number of developers.</p><p>I don&apos;t know what those tiers will be exactly; I&apos;m figuring that out next. I do expect some blanket <strong>enterprise, site-wide licenses</strong> that hopefully make everything simpler for everyone. I&apos;ve been on the other side of the table, getting licenses approved internally with clients, and I understand predictability and simplicity go a long way.</p><h3 id="thoughts-on-pricing">Thoughts on Pricing</h3><p>As for pricing, I don&apos;t have details yet, and probably won&apos;t until launch in the next couple of months.
Range-wise, it&apos;s hard to compare to other commercial or dual-licensed products out there, since I don&apos;t want to do any individual or per-seat licenses and that seems to be the norm. I am, however, keenly aware of how much tooling and library products cost, as I have to pay for many of these myself.</p><p>But if I were to compare to the cost for a team of 10 or 50 or 100 for their IDEs, I would expect my commercial license price to be a fraction of that.</p><p>Thanks again to everyone that&apos;s reached out with kind words and support, and to the community for their patience while I figure things out.</p> AutoMapper and MediatR Going Commercial https://www.jimmybogard.com/automapper-and-mediatr-going-commercial/ Jimmy Bogard Wed, 02 Apr 2025 13:00:12 +0000 <p>Yes, another one of &quot;those posts&quot;. But tl;dr:</p><p><strong>In order to ensure the long-term sustainability of my OSS projects, I will be commercializing AutoMapper and MediatR.</strong></p><p>I did not post this on April 1st for obvious reasons. But first, a little background on how I got to this point.</p><h3 id="how-i-got-here">How I Got Here</h3><p>These two projects originated during my time at Headspring, a consulting company I worked at for over 12 years. About 5 years ago, in January 2020, I decided to strike off on my own and give solo consulting a try. Although it was a scary leap, it&apos;s been more rewarding than I could have possibly hoped for, in <em>almost</em> every area.</p><p>The area where it didn&apos;t work out well, and not at all intentionally, was OSS work:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2025/04/image-1.png" class="kg-image" alt loading="lazy" width="914" height="472" srcset="https://www.jimmybogard.com/content/images/size/w600/2025/04/image-1.png 600w, https://www.jimmybogard.com/content/images/2025/04/image-1.png 914w" sizes="(min-width: 720px) 720px"></figure><p>You can see exactly where my contributions cratered and flat-lined. And that&apos;s just commits&#x2014;issues, PRs, discussions, all my time dried up. This wasn&apos;t the intention but was a natural side effect of me focusing on my consulting business.</p><p>At Headspring, my time on OSS was directly encouraged and sponsored by them. I could use time between projects to invest back in existing OSS or new OSS, because it benefited the client, the company, and the employees (me and my coworkers).</p><p>With me leaving that company, and that company then selling to Accenture later that year, I had no direct major sponsor of my OSS work anymore. My free time was being spent growing and ensuring the success of my consulting company, which being solo, is...kinda important.</p><p>Taking time to see how things have been going on all fronts, I had a bit of a shock looking at my OSS work. I realized that model is not sustainable for the long-term success of these projects, which I still endorse and believe in.
I need to be able to pay for my time to work on these projects, and get direct feedback from paying clients, like I had earlier at Headspring.</p><h3 id="what-will-this-look-like">What Will This Look Like?</h3><p>The short answer is &quot;I don&apos;t know exactly&quot;. I&apos;m working out those details now and will share them when I figure it out. I have lots of examples of what does and doesn&apos;t work well, at least from my perspective, as well as what I think will work well for these projects.</p><p>Short term, nothing will change. I&apos;ll still be as (un)responsive on GitHub issues, and I just pushed out a couple of releases of existing work.</p><p>My goal is to be able to pay for the time to spend actually improving these projects, building out communities, helping more users, and in general, doing the things that people have asked me MANY times over the years that I should do, but I didn&apos;t, because it was not my job. OSS was/is/never will be a hobby for me. I want to change it to at least be part of my job and to fund real work.</p><p>I can&#x2019;t rely on donations, I don&apos;t want to make developers pay anything or do anything to punish/annoy them, and I certainly don&apos;t think it&apos;s Microsoft&apos;s job to &quot;pay me the money.&quot; Past that, I&apos;m still figuring it out.</p><h3 id="when-will-this-happen">When Will This Happen?</h3><p>I don&apos;t know; it&apos;s still just me that owns everything. It&apos;s still my free time being used to sort it out, as my day job is still consulting. But I plan to be open with this whole process. I&apos;m sure I&apos;ll surprise someone, but the goal here is to be transparent.</p><p>Personally, I&apos;m both filled with excitement and dread&#x2014;doing these projects for so long has been incredibly rewarding, especially as this is code that came directly out of many, many long-lived production-deployed projects at Headspring. But I don&apos;t want these projects to wither and die on the vine; I want them to grow and evolve and thrive. And not just these projects&#x2014;I want ALL my OSS projects (Respawn etc.) to thrive. This is how it needs to happen.</p><h3 id="final-thanks">Final Thanks</h3><p>Thanks to all who have contributed over the years, and especially to <a href="https://www.linkedin.com/in/lbargaoanu/?ref=jimmybogard.com" rel="noreferrer">Lucian Bargaoanu</a>, who really helped pick up the torch with AutoMapper after I more or less fell off the map. Also thanks to my GitHub sponsors, as many a pint has been purchased with your generous support.
And finally, thanks to the community; I never hoped anything I built would help anyone beyond my clients, coworkers, and company, but it&apos;s always nice to hear that it has.</p> MediatR 12.5.0 Released https://www.jimmybogard.com/mediatr-12-5-0-released/ Jimmy Bogard Tue, 01 Apr 2025 18:50:06 +0000 <p>I pushed out MediatR 12.5 today:</p><ul><li><a href="https://github.com/jbogard/MediatR/releases/tag/v12.5.0?ref=jimmybogard.com" rel="noreferrer">Release Notes</a></li><li><a href="https://www.nuget.org/packages/MediatR?ref=jimmybogard.com" rel="noreferrer">NuGet</a></li></ul><p>This is mainly a regular minor release with a couple of extra interesting features:</p><ul><li><a href="https://github.com/jbogard/MediatR/pull/1065?ref=jimmybogard.com" rel="noreferrer">Adding convenience method to register open behaviors</a></li><li>Better cancellation token support (it&apos;s now passed everywhere, including behaviors)</li></ul><p>And some other cleanup items as well. Enjoy!</p> AutoMapper 14.0 Released https://www.jimmybogard.com/automapper-14-0-released/ Jimmy Bogard Wed, 19 Feb 2025 13:41:24 +0000 <p>I pushed out version 14.0 (!) of AutoMapper over the weekend:</p><ul><li><a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v14.0.0?ref=jimmybogard.com" rel="noreferrer">Release notes</a></li><li><a href="https://www.nuget.org/packages/automapper/?ref=jimmybogard.com" rel="noreferrer">NuGet</a></li></ul><p>This release targets .NET 8 (up from .NET 6 in the previous release).
It&apos;s mainly a bug fix release, with some quality-of-life improvements in configuration validation, where we gather up all the possible validation errors before reporting them in an aggregate exception.</p>
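<p>If you haven&apos;t used it, configuration validation is the <code>AssertConfigurationIsValid</code> check that flags unmapped destination members at startup. A minimal sketch of how the aggregated errors surface - the <code>Source</code>/<code>Dest</code> types here are made up for illustration:</p><pre><code class="language-csharp">var config = new MapperConfiguration(cfg =&gt;
{
    cfg.CreateMap&lt;Source, Dest&gt;();
    cfg.CreateMap&lt;Other, OtherDto&gt;();
});

// Throws AutoMapperConfigurationException; as of this release the exception
// reports every invalid member across all maps instead of stopping at the first
config.AssertConfigurationIsValid();</code></pre>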
<p>Enjoy!</p> Integrating the Particular Service Platform with Aspire https://www.jimmybogard.com/integrating-the-particular-service-platform-with-aspire/ Jimmy Bogard Tue, 24 Sep 2024 22:48:57 +0000 <p>I&apos;ve been playing around with <a href="https://learn.microsoft.com/en-us/dotnet/aspire/get-started/aspire-overview?ref=jimmybogard.com" rel="noreferrer">Aspire</a> for a bit, mainly to understand &quot;is this a thing I should care about?&quot;, and part of what I wanted to do is take a complex &quot;hello world&quot; distributed system and convert it to Aspire. Along the way, Particular Software also released container support for their <a href="https://particular.net/service-platform?ref=jimmybogard.com" rel="noreferrer">Service Platform</a>, so it also seemed like a good opportunity to try it out.</p><p>I&apos;ll follow up in another post about Aspire impressions, but the NServiceBus part was actually relatively simple. Many Aspire integrations have some kind of 1st-party support where you can do things like:</p><pre><code class="language-csharp">var rmqPassword = builder.AddParameter(&quot;messaging-password&quot;);
var dbPassword = builder.AddParameter(&quot;db-password&quot;);

var broker = builder.AddRabbitMQ(name: &quot;broker&quot;, password: rmqPassword, port: 5672)
    .WithDataVolume()
    .WithManagementPlugin()
    .WithEndpoint(&quot;management&quot;, e =&gt; e.Port = 15672)
    .WithHealthCheck();

var mongo = builder.AddMongoDB(&quot;mongo&quot;);

var sql = builder.AddSqlServer(&quot;sql&quot;, password: dbPassword)
    .WithHealthCheck()
    .WithDataVolume()
    .AddDatabase(&quot;sqldata&quot;);</code></pre><p>And now my system has RabbitMQ, MongoDB, and SQL Server up and running in containers. There&apos;s a lot of stock configuration going on behind <code>AddSqlServer</code> and similar methods, but we don&apos;t <em>have</em> to use those convenience methods if we don&apos;t want to.</p><p>The overall Service Platform architecture looks something like:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/09/image-1.png" class="kg-image" alt loading="lazy" width="2000" height="1136" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/09/image-1.png 600w, https://www.jimmybogard.com/content/images/size/w1000/2024/09/image-1.png 1000w, https://www.jimmybogard.com/content/images/size/w1600/2024/09/image-1.png 1600w, https://www.jimmybogard.com/content/images/2024/09/image-1.png 2352w" sizes="(min-width: 720px) 720px"></figure><p>The &quot;instances&quot; here are running containers that we need to configure in Aspire. On top of that, we might also want to have Service Pulse (another container) and Service Insight (a Windows-only WPF app) running, and these all require extra configuration. Also, the Error and Audit instances use RavenDB as their backing store, but Particular has an image for that too. The <a href="https://hub.docker.com/r/particular/servicecontrol?ref=jimmybogard.com" rel="noreferrer">Docker Hub site</a> has links to docs on both the instances and the containers.</p><p>First up, we need to provide our license to the running containers as raw text in an environment variable, so we&apos;ll just read our license (this is just for local development):</p><pre><code class="language-csharp">var license = File.ReadAllText(
    Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
        &quot;ParticularSoftware&quot;,
        &quot;license.xml&quot;));</code></pre><p>Next, we need our RavenDB instance. There&apos;s a special image from Particular, so we&apos;ll use the <code>AddContainer</code> method to add our custom image to our Aspire distributed application:</p><pre><code class="language-csharp">builder
    .AddContainer(&quot;servicecontroldb&quot;, &quot;particular/servicecontrol-ravendb&quot;, &quot;latest&quot;)
    .WithBindMount(&quot;AppHost-servicecontroldb-data&quot;, &quot;/opt/RavenDB/Server/RavenData&quot;)
    .WithEndpoint(8080, 8080);</code></pre><p>The container docs say that we must mount a persistent volume to that path, so we use the <code>WithBindMount</code> method to mount the volume following the <a href="https://learn.microsoft.com/en-us/dotnet/aspire/fundamentals/persist-data-volumes?ref=jimmybogard.com#understand-volumes" rel="noreferrer">Aspire docs</a>.</p><p>Next up are the Particular containers!</p><h3 id="setting-up-the-service-control-error-instance">Setting up the Service Control Error instance</h3><p>From the <a href="https://docs.particular.net/servicecontrol/servicecontrol-instances/deployment/containers?ref=jimmybogard.com" rel="noreferrer">Particular docs</a>, we see that we need to supply configuration for:</p><ul><li>Transport type (RabbitMQ, Azure Service Bus, etc.)</li><li>Connection string to the transport</li><li>Connection string to the Raven DB instance</li><li>Audit instance URLs</li><li>License</li></ul><p>Plus port mapping. Pretty quickly I ran into a few challenges:</p><ul><li>The Service Control image can start before RabbitMQ is &quot;ready&quot;, resulting in connection failures</li><li>Service Insight, the WPF app, is Windows only, so I need to connect to Service Control from a VM</li></ul><p>The base configuration is fairly straightforward; we specify the container and image, with environment variables:</p><pre><code class="language-csharp">builder
    .AddContainer(&quot;servicecontrol&quot;, &quot;particular/servicecontrol&quot;)
    .WithEnvironment(&quot;TransportType&quot;, &quot;RabbitMQ.QuorumConventionalRouting&quot;)
    .WithEnvironment(&quot;ConnectionString&quot;, &quot;host=host.docker.internal&quot;)
    .WithEnvironment(&quot;RavenDB_ConnectionString&quot;, &quot;http://host.docker.internal:8080&quot;)
    .WithEnvironment(&quot;RemoteInstances&quot;, &quot;[{\&quot;api_uri\&quot;:\&quot;http://host.docker.internal:44444/api\&quot;}]&quot;)
    .WithEnvironment(&quot;PARTICULARSOFTWARE_LICENSE&quot;, license)
    .WithArgs(&quot;--setup-and-run&quot;)</code></pre><p>But the other two challenges are a bit harder to deal with. There is no built-in way in Aspire to &quot;wait&quot; for other resources to start. This isn&apos;t new to Aspire - in the past we had to write custom hooks in Docker Compose to wait for our dependencies&apos; health checks to come back.
The extensibility is there to do such a thing, so I found an <a href="https://nikiforovall.github.io/dotnet/aspire/2024/06/28/startup-dependencies-aspire.html?ref=jimmybogard.com" rel="noreferrer">extension to do just that</a>.</p><p>The second problem was...a long slog to figure out. It&apos;s possible for a Parallels VM to communicate with Docker containers <a href="https://samestuffdifferentday.net/2024/08/02/working-in-parallels-and-docker-on-host/?ref=jimmybogard.com" rel="noreferrer">running in the Mac host</a>. However, I could <strong>not</strong> get this to work with Aspire. After doing side-by-side comparisons between container manifests running inside/outside of Aspire, I found the culprit:</p><pre><code class="language-diff">&quot;PortBindings&quot;: {
  &quot;8080/tcp&quot;: [
    {
-     &quot;HostIp&quot;: &quot;&quot;,
+     &quot;HostIp&quot;: &quot;127.0.0.1&quot;,
      &quot;HostPort&quot;: &quot;8000&quot;
    }
  ]
},</code></pre><p>With the Docker CLI, doing <code>-p 8080:8000</code> does not set the host IP. Aspire does, however, which means I can only access this container via <code>localhost</code>. Not ideal, because my Windows VM is definitely not able to access that. Instead of using <code>WithEndpoint</code> or similar, I have to drop down to container runtime args:</p><pre><code class="language-csharp">.WithContainerRuntimeArgs(&quot;-p&quot;, &quot;33333:33333&quot;)
.WaitFor(rabbitMqResource);</code></pre><p>Now my Service Control instance is up and running!</p><h3 id="setting-up-service-control-audit-monitoring-and-service-pulse">Setting up Service Control Audit, Monitoring, and Service Pulse</h3><p>Following our previous example, we can finish out our configuration for the other container instances:</p><pre><code class="language-csharp">builder
    .AddContainer(&quot;servicecontrolaudit&quot;, &quot;particular/servicecontrol-audit&quot;)
    .WithEnvironment(&quot;TransportType&quot;, &quot;RabbitMQ.QuorumConventionalRouting&quot;)
    .WithEnvironment(&quot;ConnectionString&quot;, &quot;host=host.docker.internal&quot;)
    .WithEnvironment(&quot;RavenDB_ConnectionString&quot;, &quot;http://host.docker.internal:8080&quot;)
    .WithEnvironment(&quot;PARTICULARSOFTWARE_LICENSE&quot;, license)
    .WithArgs(&quot;--setup-and-run&quot;)
    .WithEndpoint(44444, 44444)
    .WaitFor(rabbitMqResource);

builder
    .AddContainer(&quot;servicecontrolmonitoring&quot;, &quot;particular/servicecontrol-monitoring&quot;)
    .WithEnvironment(&quot;TransportType&quot;, &quot;RabbitMQ.QuorumConventionalRouting&quot;)
    .WithEnvironment(&quot;ConnectionString&quot;, &quot;host=host.docker.internal&quot;)
    .WithEnvironment(&quot;PARTICULARSOFTWARE_LICENSE&quot;, license)
    .WithArgs(&quot;--setup-and-run&quot;)
    .WithEndpoint(33633, 33633)
    .WaitFor(rabbitMqResource);

builder
    .AddContainer(&quot;servicepulse&quot;, &quot;particular/servicepulse&quot;)
    .WithEnvironment(&quot;SERVICECONTROL_URL&quot;, &quot;http://host.docker.internal:33333&quot;)
    .WithEnvironment(&quot;MONITORING_URL&quot;, &quot;http://host.docker.internal:33633&quot;)
    .WithEnvironment(&quot;PARTICULARSOFTWARE_LICENSE&quot;, license)
    .WithEndpoint(9090, 9090)
    .WaitFor(rabbitMqResource);</code></pre><p>With all this in place, my Service Pulse instance is up and running:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/09/image-2.png" class="kg-image" alt loading="lazy" width="1964" height="886" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/09/image-2.png 600w, https://www.jimmybogard.com/content/images/size/w1000/2024/09/image-2.png 1000w, https://www.jimmybogard.com/content/images/size/w1600/2024/09/image-2.png 1600w, https://www.jimmybogard.com/content/images/2024/09/image-2.png 1964w" sizes="(min-width: 720px) 720px"></figure>
<p>And on the Service Insight side, I had to do the Parallels trick of using my hosts file to create a special &quot;localhost.mac&quot; entry to point to the Mac host:</p><pre><code>10.211.55.2 localhost.mac</code></pre><p>With this in place, I can configure Service Insight in Windows to connect to the Service Pulse instance running in Docker on the Mac:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/09/image-3.png" class="kg-image" alt loading="lazy" width="662" height="264" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/09/image-3.png 600w, https://www.jimmybogard.com/content/images/2024/09/image-3.png 662w"></figure><p>All my NServiceBus messages and traces now show up just fine:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/09/image-4.png" class="kg-image" alt loading="lazy" width="2000" height="1483" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/09/image-4.png 600w, https://www.jimmybogard.com/content/images/size/w1000/2024/09/image-4.png 1000w, https://www.jimmybogard.com/content/images/size/w1600/2024/09/image-4.png 1600w, https://www.jimmybogard.com/content/images/2024/09/image-4.png 2266w" sizes="(min-width: 720px) 720px"></figure><p>Most of the work I had to do was not really Aspire-related, but just configuring Aspire to pass the appropriate configuration in to the containers. You can find the full code for my configuration here:</p><p><a href="https://github.com/jbogard/nsb-diagnostics-poc?ref=jimmybogard.com" rel="noreferrer">Code Example</a></p><p>Enjoy!</p> Tales from the .NET Migration Trenches - Turning Off the Lights https://www.jimmybogard.com/tales-from-the-net-migration-trenches-turning-off-the-lights/ Jimmy Bogard Thu, 05 Sep 2024 15:25:16 +0000
<p>Posts in this series:</p><ul><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches/" rel="noreferrer">Intro</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-catalog" rel="noreferrer">Cataloging</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-empty-proxy" rel="noreferrer">Empty Proxy</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-shared-library" rel="noreferrer">Shared Library</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-controller" rel="noreferrer">Our First Controller</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-migrating-business-logic" rel="noreferrer">Migrating Initial Business Logic</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-views" rel="noreferrer">Our First Views</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-session-state/" rel="noreferrer">Session State</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-hangfire/" rel="noreferrer">Hangfire</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-authentication/" rel="noreferrer">Authentication</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-middleware/" rel="noreferrer">Middleware</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-turning-off-the-lights/" rel="noreferrer">Turning Off the Lights</a></li></ul><p>In the last post, we looked at migrating our middleware, which we tackle on an as-needed basis. When a controller needs middleware to be migrated, we migrate that middleware over. If the entire app needs the middleware, it needs to come rather early.</p><p>Once we migrate much of our middleware over, it becomes much less work to incrementally migrate individual controllers and their subsequent actions/pages over. I won&apos;t go into deep detail on this part - mostly it&apos;s fixing namespaces and adjusting features (such as converting child actions into view components - see the sketch after this paragraph), but it can go <em>quite</em> fast. On recent teams I was working with, we migrated easily a dozen controllers a week amongst 3-4 developers. At this point, the bottleneck wasn&apos;t the conversion, but testing to make sure the pages still worked correctly.</p>
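<p>To make the &quot;adjusting features&quot; part concrete, here&apos;s roughly what a child action conversion looks like - a minimal sketch, where the <code>Menu</code> names and <code>BuildMenu</code> helper are hypothetical stand-ins for whatever your views actually render:</p><pre><code class="language-csharp">// Before (ASP.NET MVC 5): a child action rendered via @Html.Action(&quot;Menu&quot;)
[ChildActionOnly]
public ActionResult Menu() =&gt; PartialView(&quot;_Menu&quot;, BuildMenu());

// After (ASP.NET Core): the equivalent view component,
// rendered via @await Component.InvokeAsync(&quot;Menu&quot;)
public class MenuViewComponent : ViewComponent
{
    public IViewComponentResult Invoke() =&gt; View(&quot;_Menu&quot;, BuildMenu());
}</code></pre>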
<p>It&apos;s essentially testing the entire application, one page at a time, so hopefully you&apos;ve got some regression tests in some form or fashion. I&apos;m not skipping the incremental controller migration because it&apos;s uninteresting - it&apos;s just that our teams really didn&apos;t encounter many challenges there. There will be <em>something</em> that comes up, there always is, but the controller/action/view part alone is not too terrible.</p><p>But in this post I wanted to focus on getting to the end - what do we do once we&apos;ve migrated everything but authentication? When there&apos;s just one controller left, we&apos;re now OK to proceed with migrating the last pieces over and &quot;turning off the lights&quot; on the .NET 4.x application.</p><h3 id="migrating-last-features">Migrating Last Features</h3><p>The last (or next-to-last) migration typically:</p><ul><li>Migrates the last controller, usually authentication</li><li>Turns off proxying and all remote app features</li></ul><p>You don&apos;t necessarily need to split this into two separate units of work/deployments, as once you&apos;ve migrated the last set of requests you can migrate all final features from the .NET Framework application. If the last controller is authentication, we&apos;ll also need to remove remote authentication. Our current web adapter configuration before final migration is:</p><pre><code class="language-csharp">builder.Services.AddSystemWebAdapters()
    .AddJsonSessionSerializer(options =&gt;
    {
        options.RegisterKey&lt;string&gt;(&quot;FavoriteInstructor&quot;);
    })
    .AddRemoteAppClient(options =&gt;
    {
        // Provide the URL for the remote app that has enabled session querying
        options.RemoteAppUrl = new(builder.Configuration[&quot;ProxyTo&quot;]);
        // Provide a strong API key that will be used to authenticate the request on the remote app for querying the session
        options.ApiKey = builder.Configuration[&quot;RemoteAppApiKey&quot;];
    })
    .AddAuthenticationClient(true)
    .AddSessionClient();

builder.Services.AddHttpForwarder();</code></pre><p>With middleware:</p><pre><code class="language-csharp">app.UseSystemWebAdapters();
app.MapDefaultControllerRoute();
app.MapForwarder(&quot;/{**catch-all}&quot;, app.Configuration[&quot;ProxyTo&quot;]).Add(
    static builder =&gt; ((RouteEndpointBuilder)builder).Order = int.MaxValue);</code></pre><p>Along with migrating the authentication piece and all related middleware, we&apos;ll remove the above from our application startup, as well as the package references to all the proxy and System.WebAdapters packages. Once that&apos;s complete, our .NET application should now handle <em>all</em> requests. There might still be a few extra features to enable in .NET 8, such as Session:</p><pre><code class="language-csharp">builder.Services.AddSession();

// later
app.UseSession();</code></pre><p>With all that complete, our .NET 8 application should now serve all requests and host all features needed to run our entire system.</p><h3 id="turning-off-the-lights">Turning off the lights</h3><p>While our .NET 8 application may now be &quot;complete&quot;, we&apos;re not quite done yet.
In my typical last phase we will:</p><ul><li>Deploy the completed .NET 8 application to production</li><li>Monitor for any errors and any activity from the .NET 4.8 application</li><li>Adjust our .NET 8 application as necessary</li></ul><p>If we don&apos;t see any issues, then the final <em>final</em> cleanup is:</p><ul><li>Remove all .NET 4.8 code from the repository</li><li>Remove any shims to bridge from .NET 8 to .NET 4.8</li><li>Remove all .NET 4.8 application pipelines and deployments</li><li>Remove all .NET 4.8 production resources</li></ul><p>And we should end with something like:</p><figure class="kg-card kg-image-card"><a href="https://x.com/jbogard/status/1813667695289933872?ref=jimmybogard.com"><img src="https://www.jimmybogard.com/content/images/2024/09/image.png" class="kg-image" alt loading="lazy" width="1172" height="664" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/09/image.png 600w, https://www.jimmybogard.com/content/images/size/w1000/2024/09/image.png 1000w, https://www.jimmybogard.com/content/images/2024/09/image.png 1172w" sizes="(min-width: 720px) 720px"></a></figure><p>So what&apos;s next? There&apos;s still probably quite a bit to do to &quot;.NET-8-ify&quot; our existing system - all those architectural improvements we skipped in order to fast-track the migration. But most important - celebrate!</p> Upcoming Training on DDD with Vertical Slice Architecture in Munich https://www.jimmybogard.com/upcoming-training-on-ddd-with-vertical-slice-architecture-in-munich/ Jimmy Bogard Wed, 28 Aug 2024 10:28:04 +0000 <p>I&apos;ve got another training event coming up focusing on Domain-Driven Design with Vertical Slice Architecture in Munich on October 21-23rd.</p><p>A little different from previous runs of this course is an option for either a 2-day or 3-day version. I had received feedback that folks were also interested in larger-scale design concepts such as bounded contexts, messaging, integration patterns, microservices, and modular monoliths. So I&apos;ve included a 3rd day that covers these topics, where we look at encapsulation and cohesion at larger and larger scopes.</p><p>We&apos;ll cover:</p><ul><li>Refactoring an existing system to leverage Vertical Slice Architecture</li><li>Applying Domain-Driven Design techniques to model complex business needs</li><li>Communication between slices</li><li>Exploring Validation and Testing (and other cross-cutting concerns) using Vertical Slice Architecture</li><li>Examining various design patterns, code smells, and refactoring techniques</li><li>Implementing the Vertical Slice Architectural pattern in various enterprise application scenarios (minimal APIs, Blazor, Web APIs, etc.)</li></ul><p>And on the final day:</p><ul><li>Service boundaries and bounded contexts</li><li>Communication between bounded contexts</li><li>Microservices and modular monoliths</li><li>Studying distributed systems patterns, tools, and libraries such as NServiceBus</li></ul><p>The course pulls together my experiences building such systems for nearly 20 years now.
And if you can&apos;t make the course during the day, I&apos;m also hosting a networking event in the evening where you can meet me and the other attendees and ask me questions. I hope to see you there!</p><p><a href="https://my.weezevent.com/domain-driven-design-with-vertical-slice-architecture-1?ref=jimmybogard.com" rel="noreferrer">Register Now</a></p> Tales from the .NET Migration Trenches - Middleware https://www.jimmybogard.com/tales-from-the-net-migration-trenches-middleware/ Jimmy Bogard Tue, 06 Aug 2024 15:46:39 +0000
<p>Posts in this series:</p><ul><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches/" rel="noreferrer">Intro</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-catalog" rel="noreferrer">Cataloging</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-empty-proxy" rel="noreferrer">Empty Proxy</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-shared-library" rel="noreferrer">Shared Library</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-controller" rel="noreferrer">Our First Controller</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-migrating-business-logic" rel="noreferrer">Migrating Initial Business Logic</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-views" rel="noreferrer">Our First Views</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-session-state/" rel="noreferrer">Session State</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-hangfire/" rel="noreferrer">Hangfire</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-authentication/" rel="noreferrer">Authentication</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-middleware/" rel="noreferrer">Middleware</a></li></ul><p>In the last post, we looked at tackling probably the most important piece of middleware - authentication. But many ASP.NET MVC 5 applications have lots of middleware, and not all of it should be migrated without some analysis of whether that middleware is actually needed anymore.</p><p>This part is entirely dependent on your application - you might have little to no middleware, or lots. Middleware can also exist in a number of different places:</p><ul><li>Web.config (you WILL forget this)</li><li>Global.asax (probably calling into other classes with the middleware configuration)</li><li>OWIN Startup</li></ul><p>When choosing the first controller to migrate, I&apos;m also looking at which controllers have the least amount of middleware, just to minimize the heavy first lift.</p><p>Let&apos;s look at our various middleware, and see what makes sense to move over, starting with our <code>web.config</code>.</p><h3 id="migrating-webconfig">Migrating Web.Config</h3><p>I think I forget the web.config middleware mainly because I&apos;ve tried to burn most things ASP.NET from my brain. But we&apos;ll find lots of important hosting configuration settings in our web.config, from custom middleware to error handling, application configuration, server configuration and more. Luckily for us, nearly all out-of-the-box configuration has a direct analog in Kestrel. We mostly need to worry about anything custom here.
My sample app doesn&apos;t have a lot going on:</p><pre><code class="language-xml">&lt;system.web&gt;
  &lt;compilation debug=&quot;true&quot; targetFramework=&quot;4.8.1&quot; /&gt;
  &lt;httpRuntime targetFramework=&quot;4.8.1&quot; /&gt;
  &lt;customErrors mode=&quot;RemoteOnly&quot; redirectMode=&quot;ResponseRewrite&quot;&gt;
    &lt;error statusCode=&quot;404&quot; redirect=&quot;/404Error.aspx&quot; /&gt;
  &lt;/customErrors&gt;
  &lt;!-- Glimpse: This can be commented in to add additional data to the Trace tab when using WebForms
  &lt;trace writeToDiagnosticsTrace=&quot;true&quot; enabled=&quot;true&quot; pageOutput=&quot;false&quot;/&gt;
  --&gt;
  &lt;httpModules&gt;
    &lt;add name=&quot;Glimpse&quot; type=&quot;Glimpse.AspNet.HttpModule, Glimpse.AspNet&quot; /&gt;
  &lt;/httpModules&gt;
  &lt;httpHandlers&gt;
    &lt;add path=&quot;glimpse.axd&quot; verb=&quot;GET&quot; type=&quot;Glimpse.AspNet.HttpHandler, Glimpse.AspNet&quot; /&gt;
  &lt;/httpHandlers&gt;
&lt;/system.web&gt;</code></pre><p>We only have one set of custom modules/handlers, and it&apos;s the now-dead (and much missed) Glimpse project. In the rest of the configuration, we only see custom errors redirecting to an .ASPX page, which we can easily port over using <a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/error-handling?view=aspnetcore-8.0&amp;ref=jimmybogard.com" rel="noreferrer">custom errors in ASP.NET Core</a>. Otherwise there&apos;s not much going on here.</p>
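<p>For the custom errors piece specifically, the closest ASP.NET Core analog is the status code pages middleware. A minimal sketch, assuming the 404 page gets ported to an endpoint at <code>/Error/404</code> (a route I&apos;ve made up for illustration):</p><pre><code class="language-csharp">var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();

var app = builder.Build();

// Re-execute the pipeline against an error endpoint while preserving the
// original status code - the analog of redirectMode=&quot;ResponseRewrite&quot;
app.UseStatusCodePagesWithReExecute(&quot;/Error/{0}&quot;);
app.UseExceptionHandler(&quot;/Error/500&quot;);

app.MapDefaultControllerRoute();
app.Run();</code></pre>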
The first custom filter provides some customization around handling validation errors and providing a common error result back to the UI:</p><pre><code class="language-csharp">public void OnActionExecuting(ActionExecutingContext filterContext)
{
    if (!filterContext.Controller.ViewData.ModelState.IsValid)
    {
        if (filterContext.HttpContext.Request.HttpMethod == &quot;GET&quot;)
        {
            var result = new HttpStatusCodeResult(HttpStatusCode.BadRequest);
            filterContext.Result = result;
        }
        else
        {
            var result = new ContentResult();
            string content = JsonConvert.SerializeObject(filterContext.Controller.ViewData.ModelState,
                new JsonSerializerSettings
                {
                    ReferenceLoopHandling = ReferenceLoopHandling.Ignore
                });
            result.Content = content;
            result.ContentType = &quot;application/json&quot;;
            filterContext.HttpContext.Response.StatusCode = 400;
            filterContext.Result = result;
        }
    }
}
</code></pre><p>The front end does still need this, so we want to port this over. The second filter provides automatic transaction handling:</p><pre><code class="language-csharp">public class MvcTransactionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Logger.Instance.Verbose(&quot;MvcTransactionFilter::OnActionExecuting&quot;);
        var context = StructuremapMvc.ParentScope.CurrentNestedContainer.GetInstance&lt;SchoolContext&gt;();
        context.BeginTransaction();
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Logger.Instance.Verbose(&quot;MvcTransactionFilter::OnActionExecuted&quot;);
        var instance = StructuremapMvc.ParentScope.CurrentNestedContainer.GetInstance&lt;SchoolContext&gt;();
        instance.CloseTransaction(filterContext.Exception);
    }
}
</code></pre><p>I might not do automatic transactions like this in a normal project, but because the application code expects it, we&apos;ll need to migrate this over as well. The transaction filter is interesting because it highlights the shortcomings of ASP.NET MVC 5&apos;s dependency injection capabilities - namely, there wasn&apos;t anything built in for filters. Instead of migrating this filter as-is, we need to translate it to the equivalent ASP.NET Core filter:</p><pre><code class="language-csharp">public class DbContextTransactionFilter : IAsyncActionFilter
{
    private readonly SchoolContext _dbContext;

    public DbContextTransactionFilter(SchoolContext dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        try
        {
            _dbContext.BeginTransaction();

            var actionExecuted = await next();
            if (actionExecuted.Exception != null &amp;&amp; !actionExecuted.ExceptionHandled)
            {
                _dbContext.CloseTransaction(actionExecuted.Exception);
            }
            else
            {
                _dbContext.CloseTransaction();
            }
        }
        catch (Exception ex)
        {
            _dbContext.CloseTransaction(ex);
            throw;
        }
    }
}
</code></pre><p>And we register our filter:</p><pre><code class="language-csharp">builder.Services.AddControllersWithViews(opt =&gt;
{
    opt.Filters.Add&lt;DbContextTransactionFilter&gt;();
});
</code></pre><p>Now our filter will have its <code>DbContext</code> injected instead of going out to a custom extension to mimic per-request service lifetimes.</p><p>Finally, let&apos;s look at the OWIN middleware.</p><h3 id="owin-middleware">OWIN Middleware</h3><p>OWIN middleware can be found in classes with the <code>OwinStartup</code> attribute configured for them. Usually this is a &quot;Startup&quot; class but it could be anything. 
In my sample app, we have:</p><pre><code class="language-csharp">[assembly: OwinStartup(typeof(Startup))]

namespace ContosoUniversity
{
    public partial class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR();

            GlobalConfiguration.Configuration
                .UseSqlServerStorage(&quot;SchoolContext&quot;)
                .UseStructureMapActivator(IoC.Container)
                ;

            app.UseHangfireDashboard();
            app.UseHangfireServer(new BackgroundJobServerOptions
            {
                Queues = new[] { Queues.Default }
            });

            ConfigureAuth(app);
        }
    }
}
</code></pre><p>Basically, it&apos;s:</p><ul><li>SignalR</li><li>Hangfire</li><li>Authentication</li></ul><p>Authentication might differ slightly from the ASP.NET authentication, so we&apos;ll want to port settings there. SignalR and Hangfire can be dealt with individually, but otherwise we don&apos;t have any custom OWIN middleware. This is fairly typical unless your application wholly relies on OWIN instead of, say, IIS.</p><p>Middleware isn&apos;t the most exciting code to port over, but it&apos;s critical for ensuring our new application preserves the existing behavior of the .NET Framework application.</p><p>In our last post, we&apos;ll cover finishing up our migration and &quot;turning off the lights&quot;.</p> Vertical Slice Architecture Training Course in July in the Netherlands https://www.jimmybogard.com/vertical-slice-architecture-training-course-in-july/ Jimmy Bogard urn:uuid:f81bc72a-ac25-c6bf-c76a-15f0127c2838 Mon, 22 Apr 2024 22:15:00 +0000 <p>The last training course in Zurich was a success, in that no laptops were harmed. I think. I put a poll out on where I should do the training next and quite a few folks suggested the Netherlands. I&apos;m happy to announce that the next VSA course will</p> <p>The last training course in Zurich was a success, in that no laptops were harmed. I think. I put a poll out on where I should do the training next and quite a few folks suggested the Netherlands. I&apos;m happy to announce that the next VSA course will be in the Netherlands on July 17-18th. </p><p>This course approaches the topic from the perspective of refactoring an existing system to this architecture. We also look at larger and larger boundaries of cohesion, from applications to services to systems. 
I&apos;m also doing something new, a Q&amp;A at a pub where you can ask questions while we share authentic Dutch beer (Heineken).</p><p>More details here:</p><p><a href="https://codeartify.com/events/jimmy-in-netherlands?ref=jimmybogard.com" rel="noreferrer">Vertical Slice Architecture Training</a></p><p>Hope to see you there!</p> Tales from the .NET Migration Trenches - Authentication https://www.jimmybogard.com/tales-from-the-net-migration-trenches-authentication/ Jimmy Bogard urn:uuid:b2a5b599-4700-119d-2274-38a5c477c0b4 Mon, 22 Apr 2024 22:00:50 +0000 <p>Posts in this series:</p><ul><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches/" rel="noreferrer">Intro</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-catalog" rel="noreferrer">Cataloging</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-empty-proxy" rel="noreferrer">Empty Proxy</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-shared-library" rel="noreferrer">Shared Library</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-controller" rel="noreferrer">Our First Controller</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-migrating-business-logic" rel="noreferrer">Migrating Initial Business Logic</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-views" rel="noreferrer">Our First Views</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-session-state/" rel="noreferrer">Session State</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-hangfire/" rel="noreferrer">Hangfire</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-authentication/" rel="noreferrer">Authentication</a></li></ul><p>Of all the topics in .NET migration, authentication, like always, is the one that is most characterized by &quot;It Depends&quot;. 
The solution for addressing</p> <p>Posts in this series:</p><ul><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches/" rel="noreferrer">Intro</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-catalog" rel="noreferrer">Cataloging</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-empty-proxy" rel="noreferrer">Empty Proxy</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-shared-library" rel="noreferrer">Shared Library</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-controller" rel="noreferrer">Our First Controller</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-migrating-business-logic" rel="noreferrer">Migrating Initial Business Logic</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-views" rel="noreferrer">Our First Views</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-session-state/" rel="noreferrer">Session State</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-hangfire/" rel="noreferrer">Hangfire</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-authentication/" rel="noreferrer">Authentication</a></li></ul><p>Of all the topics in .NET migration, authentication, like always, is the one that is most characterized by &quot;It Depends&quot;. The solution for addressing authentication is wholly dependent on what the authentication solution is in the current .NET 4.8 application. If you&apos;re doing external SSO, then it&apos;s likely quite simple - the new solution is simply a new client for your external SSO.</p><p>In my situation, the .NET Framework application was responsible for authentication, i.e., it had a login screen. It was a home-grown identity provider, not using ASP.NET Identity. If you&apos;re using ASP.NET Identity and all the database backing stores, you&apos;re also looking at a data migration. I&apos;ll leave that as an exercise to the reader ;)</p><p>The end result we&apos;re looking for is:</p><ul><li>Users can log in via one of the apps (.NET 8 or .NET 4.8)</li><li>Once logged in, both apps recognize the user as authenticated and can read identical claims/roles</li><li>Users can log out via one of the apps</li></ul><p>Our two dumbed down options available for solving this are:</p><ul><li>Remote authentication in ASP.NET 4.8</li><li>Cookie sharing between ASP.NET 4.8 and ASP.NET Core</li></ul><p>The cookie sharing option is intriguing but it has some limitations:</p><ul><li>Only works with <code>Microsoft.Owin</code> cookie authentication</li><li>Requires shared cookie and data protection configuration between applications</li></ul><p>Our application didn&apos;t meet that first constraint, so we couldn&apos;t consider it. 
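</p><p>(For what it&apos;s worth, if your application <em>does</em> use <code>Microsoft.Owin</code> cookie authentication, cookie sharing mostly comes down to aligning the cookie name and the data protection configuration on both sides. A minimal sketch of the ASP.NET Core half, with a hypothetical shared key ring path and cookie name:)</p><pre><code class="language-csharp">builder.Services.AddDataProtection()
    // Both apps must point at the same key ring and use the same application name
    .PersistKeysToFileSystem(new DirectoryInfo(@&quot;\\server\share\keys&quot;))
    .SetApplicationName(&quot;ContosoUniversity&quot;);

builder.Services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =&gt; options.Cookie.Name = &quot;.AspNet.SharedCookie&quot;);
</code></pre><p>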
Remote authentication works by:</p><ul><li>Users log in and out of the ASP.NET 4.8 application</li><li>ASP.NET Core adapters call APIs in ASP.NET 4.8 to retrieve user authentication information (claims) and populate its claims identity with this data</li></ul><p>It&apos;s very similar to the remote session story:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/04/readonly_session.png" class="kg-image" alt loading="lazy" width="807" height="498" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/04/readonly_session.png 600w, https://www.jimmybogard.com/content/images/2024/04/readonly_session.png 807w" sizes="(min-width: 720px) 720px"></figure><p>Except we&apos;re getting the claims information from the ASP.NET application. This means, however, that the login/logout endpoints will need to be migrated <strong>last</strong> - so if our authentication story is complicated, we&apos;ll have plenty of runway.</p><h3 id="configuring-remote-authentication">Configuring Remote Authentication</h3><p>Configuring remote authentication is straightforward if we&apos;ve already added the remote app server for session. We add a single line of code to the ASP.NET application to <code>AddAuthenticationServer</code>:</p><pre><code class="language-csharp">this.AddSystemWebAdapters()
    .AddJsonSessionSerializer(options =&gt;
    {
        options.RegisterKey&lt;string&gt;(&quot;FavoriteInstructor&quot;);
    })
    // Provide a strong API key that will be used to authenticate the request on the remote app for querying the session
    // ApiKey is a string representing a GUID
    .AddRemoteAppServer(options =&gt; options.ApiKey = ConfigurationManager.AppSettings[&quot;RemoteAppApiKey&quot;])
    .AddAuthenticationServer()
    .AddSessionServer();
</code></pre><p>And in our ASP.NET Core application, to add the authentication client:</p><pre><code class="language-csharp">builder.Services.AddSystemWebAdapters()
    .AddJsonSessionSerializer(options =&gt;
    {
        options.RegisterKey&lt;string&gt;(&quot;FavoriteInstructor&quot;);
    })
    .AddRemoteAppClient(options =&gt;
    {
        // Provide the URL for the remote app that has enabled session querying
        options.RemoteAppUrl = new(builder.Configuration[&quot;ProxyTo&quot;]);

        // Provide a strong API key that will be used to authenticate the request on the remote app for querying the session
        options.ApiKey = builder.Configuration[&quot;RemoteAppApiKey&quot;];
    })
    .AddAuthenticationClient(true)
    .AddSessionClient();
</code></pre><p>There are a ton of options, because of course authentication is complicated, but this also means we can turn on authentication and authorization as normal in our ASP.NET Core application:</p><pre><code class="language-csharp">app.UseRouting();

app.UseAuthentication();
app.UseAuthorization();

app.UseSystemWebAdapters();
</code></pre><p>With this in place, we can access all the normal <code>ClaimsPrincipal</code> and <code>IIdentity</code> details anywhere inside our ASP.NET Core application. We can&apos;t examine the security cookie - but we shouldn&apos;t anyway, since our application code should only be concerned with the principal and identity, not the underlying details of how that got populated.</p><p>If we need to add more claims, those will get added on the ASP.NET side and automatically populated on the ASP.NET Core side with those API calls back to get all the claims for the user. 
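</p><p>Once the adapters are wired up, application code on the ASP.NET Core side consumes those claims exactly as if authentication had happened locally. A hypothetical example of mine (the action and the &quot;Admin&quot; role are not from the sample app):</p><pre><code class="language-csharp">[Authorize]
public IActionResult Profile()
{
    // This ClaimsPrincipal was populated from claims returned by the ASP.NET 4.8 app
    var name = User.Identity?.Name;
    var isAdmin = User.IsInRole(&quot;Admin&quot;);

    return Content($&quot;{name} (admin: {isAdmin})&quot;);
}
</code></pre><p>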
It&apos;s another clever shim to allow us to migrate all controllers, actions, and application code that require authentication and authorization.</p><p>In the next post, we&apos;ll look at the middleware that exists in the ASP.NET application and migrate anything we actually want to migrate, and leave the rest behind.</p> Upcoming Training on Modern .NET with Vertical Slice Architecture https://www.jimmybogard.com/upcoming-training-on-vertical-slice-architecture/ Jimmy Bogard urn:uuid:9845daf7-1f25-1599-8d6d-374d6c32ae9f Wed, 07 Feb 2024 21:00:59 +0000 <p>Something new I&apos;m starting this year is a two-day course on Modern .NET systems with Vertical Slice Architecture. It contains a lot of topics that I&apos;ve consulted with organizations and built systems around for over a decade now, and I wanted to wrap my learnings</p> <p>Something new I&apos;m starting this year is a two-day course on Modern .NET systems with Vertical Slice Architecture. It contains a lot of topics that I&apos;ve consulted with organizations and built systems around for over a decade now, and I wanted to wrap my learnings up into a single training course.</p><p>And since most of the systems I deal with are not greenfield but existing systems, this course focuses on refactoring a system using Vertical Slice Architecture, and all the patterns, tools, and libraries that come along with it. In particular, I&apos;ll be focusing on:</p><ul><li>Refactoring an existing system to leverage Vertical Slice Architecture</li><li>Applying Domain-Driven Design techniques to model complex business needs</li><li>Exploring various design patterns, code smells, and refactoring techniques</li><li>Using the Vertical Slice Architectural pattern in a variety of modern .NET 8 application scenarios (minimal APIs, Blazor, Web APIs, etc.)</li><li>Effective use of common libraries such as AutoMapper and MediatR</li><li>Examining distributed systems patterns, tools, and libraries such as NServiceBus</li></ul><p>The training will be in Zurich, Switzerland on April 9-10. Use the early bird voucher code <strong>EarlyBird20</strong> through the end of February for a 20% discount:</p><p><a href="https://www.letsboot.ch/en-gb/course-date/jimmy-bogard-net-slice-2024-04-09?utm_source=twitter&amp;utm_campaign=jimmy-vertical-slice&amp;utm_medium=personal-post&amp;utm_term=en&amp;utm_content=jimmy-vertical-slice" rel="noreferrer"><strong>Register Now</strong></a></p><p>I hope to see you there!</p> AutoMapper 13.0 Released https://www.jimmybogard.com/automapper-13-0-released/ Jimmy Bogard urn:uuid:f989ee9e-3447-24ee-b7d1-01d00d605f1b Tue, 06 Feb 2024 15:38:00 +0000 <p>Today I pushed out AutoMapper 13.0 (is that too many...?):</p><ul><li><a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v13.0.0?ref=jimmybogard.com" rel="noreferrer">Release Notes</a></li><li><a href="https://github.com/AutoMapper/AutoMapper/compare/v12.0.1...v13.0.0?ref=jimmybogard.com" rel="noreferrer">Changelog</a></li><li><a href="https://www.nuget.org/packages/AutoMapper?ref=jimmybogard.com" rel="noreferrer">NuGet</a></li><li><a href="https://docs.automapper.org/en/latest/13.0-Upgrade-Guide.html?ref=jimmybogard.com" rel="noreferrer">Upgrade Guide</a></li></ul><p>Probably the biggest change with this release is folding in Microsoft.Extensions.DependencyInjection support directly. 
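In practice, that means the familiar registration should now come straight from the core package - something like this (my sketch of typical usage):</p><pre><code class="language-csharp">// No separate DI package required as of 13.0
builder.Services.AddAutoMapper(typeof(Program));
</code></pre><p>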
The <a href="https://www.nuget.org/packages/AutoMapper.Extensions.Microsoft.DependencyInjection/?ref=jimmybogard.com" rel="noreferrer">AutoMapper.Extensions.Microsoft.DependencyInjection</a> package is deprecated as a result.</p><p>Side note, the docs were messed up</p> <p>Today I pushed out AutoMapper 13.0 (is that too many...?):</p><ul><li><a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v13.0.0?ref=jimmybogard.com" rel="noreferrer">Release Notes</a></li><li><a href="https://github.com/AutoMapper/AutoMapper/compare/v12.0.1...v13.0.0?ref=jimmybogard.com" rel="noreferrer">Changelog</a></li><li><a href="https://www.nuget.org/packages/AutoMapper?ref=jimmybogard.com" rel="noreferrer">NuGet</a></li><li><a href="https://docs.automapper.org/en/latest/13.0-Upgrade-Guide.html?ref=jimmybogard.com" rel="noreferrer">Upgrade Guide</a></li></ul><p>Probably the biggest change with this release is folding in Microsoft.Extensions.DependencyInjection support directly. The <a href="https://www.nuget.org/packages/AutoMapper.Extensions.Microsoft.DependencyInjection/?ref=jimmybogard.com" rel="noreferrer">AutoMapper.Extensions.Microsoft.DependencyInjection</a> package is deprecated as a result.</p><p>Side note, the docs were messed up with this version so go to the &quot;latest&quot; version to see them.</p> Tales from the .NET Migration Trenches - Hangfire https://www.jimmybogard.com/tales-from-the-net-migration-trenches-hangfire/ Jimmy Bogard urn:uuid:7bc06e68-1bdc-caf7-b6a1-8e622280c40f Mon, 29 Jan 2024 18:04:40 +0000 <p>Posts in this series:</p><ul><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches/" rel="noreferrer">Intro</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-catalog" rel="noreferrer">Cataloging</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-empty-proxy" rel="noreferrer">Empty Proxy</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-shared-library" rel="noreferrer">Shared Library</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-controller" rel="noreferrer">Our First Controller</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-migrating-business-logic" rel="noreferrer">Migrating Initial Business Logic</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-views" rel="noreferrer">Our First Views</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-session-state/" rel="noreferrer">Session State</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-hangfire/" rel="noreferrer">Hangfire</a></li></ul><p>In the last post, we encountered our first instance of shared runtime data between our different ASP.NET 4.8 and ASP.NET Core applications, in Session</p> <p>Posts in this series:</p><ul><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches/" rel="noreferrer">Intro</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-catalog" rel="noreferrer">Cataloging</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-empty-proxy" rel="noreferrer">Empty Proxy</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-shared-library" rel="noreferrer">Shared Library</a></li><li><a 
href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-controller" rel="noreferrer">Our First Controller</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-migrating-business-logic" rel="noreferrer">Migrating Initial Business Logic</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-our-first-views" rel="noreferrer">Our First Views</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-session-state/" rel="noreferrer">Session State</a></li><li><a href="https://www.jimmybogard.com/tales-from-the-net-migration-trenches-hangfire/" rel="noreferrer">Hangfire</a></li></ul><p>In the last post, we encountered our first instance of shared runtime data between our different ASP.NET 4.8 and ASP.NET Core applications, in Session State. There are other mechanisms to store state in ASP.NET 4.8 (such as Application state), but Session is the most common. In this post, we&apos;ll look at another instance of shared state that isn&apos;t built in to ASP.NET, but I find quite common - <a href="https://www.hangfire.io/?ref=jimmybogard.com" rel="noreferrer">Hangfire</a>.</p><p>Hangfire is an easy way to perform background tasks/processes in a .NET web application, and it also supports persistent storage for both the jobs and queues. I use it quite a lot in applications where I don&apos;t want to introduce a separate host for processing messages, or introduce a specific queue/broker for background jobs. Hangfire supports fire-and-forget jobs as well as &quot;cron&quot;-based jobs. It also provides a nice dashboard where you can see completed and failed jobs, with the option of retrying failed jobs as desired.</p><p>Depending on how you&apos;re using Hangfire, it introduces a unique challenge when migrating from .NET 4.8 to .NET 6/7/8. Hangfire supports both frameworks, but as usual, the devil is in the details. We want to be able to start/consume jobs from both sides AND ensure our job executes at most once.</p><p>First, let&apos;s look to see how we configure and use Hangfire today.</p><h3 id="aspnet-48-hangfire-usage">ASP.NET 4.8 Hangfire Usage</h3><p>In our OWIN startup in the ASP.NET 4.8 application, we find our Hangfire configuration:</p><pre><code class="language-csharp">GlobalConfiguration.Configuration .UseSqlServerStorage(&quot;SchoolContext&quot;) .UseStructureMapActivator(IoC.Container) ; app.UseHangfireDashboard(); app.UseHangfireServer(); </code></pre><p>We can see here that we&apos;re using SQL Server for our storage (jobs and queues), and that we&apos;re using a DI container (StructureMap) for activating/instantiating jobs. We don&apos;t see it explicitly configured but our job is using the default queue, named <code>default</code>.</p><p>Our jobs can be enqueued anywhere really, from controllers to services to startup. For anything that migrates to ASP.NET Core that uses Hangfire, we&apos;ll have to migrate that usage as well. 
Here&apos;s one usage:</p><pre><code class="language-csharp">[HttpPost]
[ValidateAntiForgeryToken]
public async Task&lt;ActionResult&gt; Edit(Edit.Command command)
{
    await _mediator.Send(command);

    _backgroundJobClient.Enqueue(() =&gt; LogEdit(command));

    return this.RedirectToActionJson(c =&gt; c.Index(null));
}

[NonAction]
public void LogEdit(Edit.Command command)
{
    _logger.Information($&quot;Editing student {command.ID}&quot;);
}
</code></pre><p>It&apos;s completely trivial, but let&apos;s assume the background job is actually doing something interesting, like sending emails or SMS messages.</p><p>If we were to migrate this by itself over to ASP.NET Core, we&apos;d immediately run into an issue - Hangfire is now running in two places - ASP.NET and ASP.NET Core, and if we do nothing additional, Hangfire in each server will try to consume those jobs. Unfortunately, it might not be able to <em>execute</em> those jobs. In the above example, the job exists simply as a method on my controller - a perfectly valid way of using Hangfire. If this method only exists in one of the web applications, the <em>other</em> web application won&apos;t be able to execute the job and it will wind up failing.</p><p>Hangfire does support web farm scenarios and the competing consumer pattern, so we still know that only one side or the other will pick up the job. But it might not be able to execute it if the job code isn&apos;t there.</p><p>We could fix this by migrating all of our job code to the &quot;shared&quot; assembly first, but this might be a complex undertaking, especially if we&apos;re using the pattern above. Instead, we can create separate queues for each host - ASP.NET 4.8 and ASP.NET Core - and ensure the job is queued to the host where that job code lives.</p><p>When both live in ASP.NET Core:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/01/image-2.png" class="kg-image" alt loading="lazy" width="1432" height="626" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/01/image-2.png 600w, https://www.jimmybogard.com/content/images/size/w1000/2024/01/image-2.png 1000w, https://www.jimmybogard.com/content/images/2024/01/image-2.png 1432w" sizes="(min-width: 720px) 720px"></figure><p>When the initiator is ASP.NET Core and the job lives in ASP.NET 4.8:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/01/image-3.png" class="kg-image" alt loading="lazy" width="1432" height="626" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/01/image-3.png 600w, https://www.jimmybogard.com/content/images/size/w1000/2024/01/image-3.png 1000w, https://www.jimmybogard.com/content/images/2024/01/image-3.png 1432w" sizes="(min-width: 720px) 720px"></figure><p>And the reverse:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/01/image-4.png" class="kg-image" alt loading="lazy" width="1432" height="626" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/01/image-4.png 600w, https://www.jimmybogard.com/content/images/size/w1000/2024/01/image-4.png 1000w, https://www.jimmybogard.com/content/images/2024/01/image-4.png 1432w" sizes="(min-width: 720px) 720px"></figure><p>And finally solely ASP.NET 4.8:</p><figure class="kg-card kg-image-card"><img src="https://www.jimmybogard.com/content/images/2024/01/image-5.png" class="kg-image" alt loading="lazy" 
width="1432" height="628" srcset="https://www.jimmybogard.com/content/images/size/w600/2024/01/image-5.png 600w, https://www.jimmybogard.com/content/images/size/w1000/2024/01/image-5.png 1000w, https://www.jimmybogard.com/content/images/2024/01/image-5.png 1432w" sizes="(min-width: 720px) 720px"></figure><p>With this setup, the job initiator must &quot;know&quot; where the job code lives. This might seem like unnecessary coupling, but keep in mind this is transitional configuration and we won&apos;t need to have this knowledge baked in once all of the jobs and initiators are migrated.</p><h3 id="configuring-for-multiple-hosts">Configuring for Multiple Hosts</h3><p>Initially, we did not specify any queue in our Hangfire configuration. Now, we&apos;ll be explicit in ASP.NET 4.8:</p><pre><code class="language-csharp">app.UseHangfireServer(new BackgroundJobServerOptions { Queues = new[] { Queues.Default } });</code></pre><p>And after pulling in the appropriate packages to ASP.NET Core, we configure our startup there with the other queue:</p><pre><code class="language-csharp">builder.Services.AddHangfire(cfg =&gt; { cfg.UseSqlServerStorage( builder.Configuration.GetConnectionString(&quot;SchoolContext&quot;)); }); builder.Services.AddHangfireServer(options =&gt; options.Queues = new[] { Queues.DefaultCore } ); </code></pre><p>I could migrate the Hangfire dashboard over to ASP.NET Core, but I left it alone for now. The YARP piece will take care of that for now.</p><p>For jobs that start and stay inside of one host, there&apos;s nothing we need to do to specify a queue. However, for jobs that cross host boundaries, we&apos;ll specify the queue name in the job:</p><pre><code class="language-csharp">[NonAction] [Queue(Queues.DefaultCore)] public void LogEdit(Edit.Command command) { _logger.Information($&quot;Editing student {command.ID}&quot;); } </code></pre><p>This ensures that the jobs start and stop where they&apos;re supposed to, and are only executed at most once. Once all of our jobs are migrated, we can rename the queue to the default name (as long as we&apos;ve drained our job queues beforehand).</p><p>So far, all of our actions don&apos;t require a logged in user. In the next post, we&apos;ll tackle authentication.</p> SSH on WSL http://aspiringcraftsman.com/2022/07/01/ssh-on-wsl.html Aspiring Craftsman urn:uuid:3b8d779e-bac3-a739-db05-ece712be54e3 Fri, 01 Jul 2022 08:00:00 +0000 <p style="text-align: center"> <a href="/wp-content/uploads/2022/07/01/ssh.png"><img title="SSH" alt="" src="/wp-content/uploads/2022/07/01/ssh.png" width="640" height="480" sizes="(max-width: 640px) 100vw, 640px" /></a> </p> <p>I recently set up a Windows machine to allow me to ssh into its WSL network from another box. I found a couple of useful guides <a href="https://www.hanselman.com/blog/how-to-ssh-into-wsl2-on-windows-10-from-an-external-machine">here</a> and <a href="https://www.hanselman.com/blog/how-to-ssh-into-wsl2-on-windows-10-from-an-external-machine">here</a>, but still ran into a few snags along the way, so thought I’d publish my configuration in case it might be useful to someone else (or perhaps even myself) in the future. Here are my steps:</p> <h1 id="step-1-install-openssh-in-wsl">Step 1: Install OpenSSH in WSL</h1> <p>Our first step will be to ensure openssh-server is installed. 
If not, issue the following and follow the prompts:</p> <div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">sudo </span>apt <span class="nb">install </span>openssh-server
</code></pre></div></div> <h1 id="step-2-configure-sshd">Step 2: Configure SSHD</h1> <p>Edit the file /etc/ssh/sshd_config to allow the desired users or groups. You’ll need to edit this with root access:</p> <div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">sudo </span>vi /etc/ssh/sshd_config
</code></pre></div></div> <p>To allow specific users, you can add the following with a list of users where &lt;user1&gt; is replaced with the desired user:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>AllowUsers &lt;user1&gt; &lt;user2&gt; &lt;userN&gt;
</code></pre></div></div> <p>To allow all users in one or more groups, add the following:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>AllowGroups &lt;group1&gt; &lt;group2&gt; &lt;groupN&gt;
</code></pre></div></div> <p>Note that these are users and groups known to WSL, not your Windows users and groups (i.e. /etc/passwd, /etc/group).</p> <h1 id="step-3-setup-startup-script">Step 3: Setup Startup Script</h1> <p>Create a new file named startup-ssh.sh in /usr/local/bin with the following contents:</p> <div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>

<span class="nv">WSL_IP</span><span class="o">=</span><span class="si">$(</span>ip addr show eth0| <span class="nb">grep</span> <span class="nt">-oP</span> <span class="s1">'(?&lt;=inet\s)\d+(\.\d+){3}'</span><span class="si">)</span>
<span class="nv">NETSH_CMD</span><span class="o">=</span>/mnt/c/Windows/System32/netsh.exe
<span class="nv">SSL_PORT</span><span class="o">=</span>22
<span class="nv">FIREWALL_RULE_NAME</span><span class="o">=</span><span class="s2">"SSH Port </span><span class="k">${</span><span class="nv">SSL_PORT</span><span class="k">}</span><span class="s2">"</span>

<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Resetting port proxy settings ... "</span>
<span class="k">${</span><span class="nv">NETSH_CMD</span><span class="k">}</span> interface portproxy reset all 2&gt;&amp;1 1&gt;/dev/null
<span class="o">[</span> <span class="nv">$?</span> <span class="o">==</span> 0 <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="nb">echo</span> <span class="s2">"OK"</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">"Error"</span>

<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Forwarding port </span><span class="k">${</span><span class="nv">SSL_PORT</span><span class="k">}</span><span class="s2"> to </span><span class="k">${</span><span class="nv">WSL_IP</span><span class="k">}</span><span class="s2"> ... "</span>
<span class="k">${</span><span class="nv">NETSH_CMD</span><span class="k">}</span> interface portproxy add v4tov4 <span class="nv">listenaddress</span><span class="o">=</span>0.0.0.0 <span class="nv">listenport</span><span class="o">=</span><span class="k">${</span><span class="nv">SSL_PORT</span><span class="k">}</span> <span class="nv">connectaddress</span><span class="o">=</span><span class="k">${</span><span class="nv">WSL_IP</span><span class="k">}</span> <span class="nv">connectport</span><span class="o">=</span><span class="k">${</span><span class="nv">SSL_PORT</span><span class="k">}</span> 2&gt;&amp;1 1&gt;/dev/null
<span class="o">[</span> <span class="nv">$?</span> <span class="o">==</span> 0 <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="nb">echo</span> <span class="s2">"OK"</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">"Error"</span>

<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Adding firewall rule if not present ... "</span>
<span class="k">if</span> <span class="k">${</span><span class="nv">NETSH_CMD</span><span class="k">}</span> advfirewall firewall show rule <span class="nv">name</span><span class="o">=</span><span class="s2">"</span><span class="k">${</span><span class="nv">FIREWALL_RULE_NAME</span><span class="k">}</span><span class="s2">"</span> 2&gt;&amp;1 1&gt;/dev/null
<span class="k">then</span>
  <span class="nb">echo</span> <span class="s2">"OK"</span>
<span class="k">else</span>
  <span class="k">${</span><span class="nv">NETSH_CMD</span><span class="k">}</span> advfirewall firewall add rule <span class="nv">name</span><span class="o">=</span><span class="s2">"</span><span class="k">${</span><span class="nv">FIREWALL_RULE_NAME</span><span class="k">}</span><span class="s2">"</span> <span class="nb">dir</span><span class="o">=</span><span class="k">in </span><span class="nv">action</span><span class="o">=</span>allow <span class="nv">protocol</span><span class="o">=</span>TCP <span class="nv">localport</span><span class="o">=</span><span class="k">${</span><span class="nv">SSL_PORT</span><span class="k">}</span> 2&gt;&amp;1 1&gt;/dev/null
  <span class="o">[</span> <span class="nv">$?</span> <span class="o">==</span> 0 <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="nb">echo</span> <span class="s2">"OK"</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">"Error"</span>
<span class="k">fi</span>

<span class="nb">echo</span> <span class="nt">-n</span> <span class="s2">"Starting WSL ssh server ... "</span>
<span class="nb">sudo</span> /etc/init.d/ssh start 2&gt;&amp;1 1&gt;/dev/null
<span class="o">[</span> <span class="nv">$?</span> <span class="o">==</span> 0 <span class="o">]</span> <span class="o">&amp;&amp;</span> <span class="nb">echo</span> <span class="s2">"OK"</span> <span class="o">||</span> <span class="nb">echo</span> <span class="s2">"Error"</span>
</code></pre></div></div> <p>This script does the following things primarily:</p> <ul> <li>Forwards traffic going to port 22 from your Windows system to port 22 on the WSL virtual machine</li> <li>Adds a Windows firewall rule to allow port 22 traffic</li> <li>Starts the ssh server</li> </ul> <p>We will be configuring Windows to execute this script on startup. The reason this is necessary is that WSL obtains a new IP address each time the system starts. 
This results in the need to reset the port forwarding and reapply it with the latest WSL IP address.</p> <h1 id="step-4-configure-super-user-execution">Step 4: Configure Super User Execution</h1> <p>As indicated in step 5 of <a href="https://faun.pub/how-to-setup-ssh-connection-on-ubuntu-windows-subsystem-for-linux-2b36afb943dc">this guide</a>, we need to allow the ssh command to be started without prompting for a password. We do this by editing the /etc/sudoers file. This can be done with the following command:</p> <div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nv">$ </span><span class="nb">sudo </span>visudo
</code></pre></div></div> <div class="theme-note"> Note: <p>The purpose of editing the /etc/sudoers file using the visudo command is to validate the syntax before saving. You could edit the file directly, but if you screw something up then you could lock yourself out of gaining root access.</p> <p>When I first used this command, it launched using the nano editor with which I’m not familiar. You can configure which editor is used by executing the following command:</p> <pre> $ sudo select-editor </pre> <p>Alternatively, you can set your EDITOR environment variable to the desired editor and use the following command:</p> <pre> $ sudo -E visudo </pre> </div> <p>Add the following as the last line of the file:</p> <div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code>%sudo <span class="nv">ALL</span><span class="o">=</span>NOPASSWD: /etc/init.d/ssh
</code></pre></div></div> <h1 id="step-5-configure-windows-task-scheduler">Step 5: Configure Windows Task Scheduler</h1> <p>Our next step will be to configure Windows Task Scheduler to launch our startup script when the system starts. Use the following steps:</p> <ul> <li> <p>Open the Task Scheduler app from the Windows Start Menu</p> </li> <li> <p>Select <code class="language-plaintext highlighter-rouge">Create Basic Task</code> from the right panel</p> </li> <li> <p>Create task with the following parameters:</p> <p>Name: Start SSH Server</p> <p>Description: Task to automate sshd startup</p> <p>Trigger: Select When the computer starts</p> <p>Action: Select Start a program</p> <p>Program/script: <code class="language-plaintext highlighter-rouge">%windir%\System32\wsl.exe</code> Add arguments (optional): <code class="language-plaintext highlighter-rouge">-d Ubuntu -e "/usr/local/bin/startup-ssh.sh"</code></p> </li> <li> <p>Confirm everything is correct and click <code class="language-plaintext highlighter-rouge">Finish</code></p> </li> </ul> <h1 id="step-6-verify-configuration">Step 6: Verify Configuration</h1> <p>Our last step is to verify we have everything configured correctly. In Task Scheduler, locate the “Start SSH Server” task and in the right panel click “Run”. If successful, you should be able to ssh from another machine to your Windows WSL virtual machine:</p> <div class="language-sh highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>ssh dgreer@dgreer-pc
</code></pre></div></div> Perhaps Too Much Validation http://aspiringcraftsman.com/2022/06/22/perhaps-too-much-validation.html Aspiring Craftsman urn:uuid:edcaf6d8-249b-709a-6250-5361404dc9fa Wed, 22 Jun 2022 08:00:00 +0000 Several factors have influenced my coding style over the years, leaving me with a preference toward lean code syntax. 
I’ve been developing for quite a while, so it would be hard to pinpoint exactly when, where, or from whom I’ve picked up various preferences, but to name a few, I prefer: code that only includes comments for public APIs or to provide explanation of algorithms; code that is free of the use of regions, explicit default access modifiers, and unused using statements; reliance upon convention over configuration (both to eliminate repetitive tasks, but also just to eliminate unnecessary code); encapsulating excessive parameters into a Parameter Object, avoidance of excessive use of attributes/annotations (actually, I’d eliminate them completely if I could), and of course deleting dead code. There is one other practice I tend to see by other developers that I dislike and that’s too much validation. <p>Several factors have influenced my coding style over the years, leaving me with a preference toward lean code syntax. I’ve been developing for quite a while, so it would be hard to pinpoint exactly when, where, or from whom I’ve picked up various preferences, but to name a few, I prefer: code that only includes comments for public APIs or to provide explanation of algorithms; code that is free of the use of regions, explicit default access modifiers, and unused using statements; reliance upon convention over configuration (both to eliminate repetitive tasks, but also just to eliminate unnecessary code); encapsulating excessive parameters into a <a href="https://wiki.c2.com/?ParameterObject">Parameter Object</a>, avoidance of excessive use of attributes/annotations (actually, I’d eliminate them completely if I could), and of course deleting dead code. There is one other practice I tend to see by other developers that I dislike and that’s too much validation.</p> <p>Perhaps you’ve seen code like this:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">MyService</span>
<span class="p">{</span>
    <span class="k">public</span> <span class="k">void</span> <span class="nf">DoSomething</span><span class="p">(</span><span class="n">IDependencyA</span> <span class="n">dependencyA</span><span class="p">,</span> <span class="n">IDependencyB</span> <span class="n">dependencyB</span><span class="p">,</span> <span class="n">IDependencyC</span> <span class="n">dependencyC</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="k">if</span><span class="p">(</span><span class="n">dependencyA</span> <span class="k">is</span> <span class="k">null</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">dependencyA</span><span class="p">));</span>
        <span class="p">}</span>
        <span class="k">if</span><span class="p">(</span><span class="n">dependencyB</span> <span class="k">is</span> <span class="k">null</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">dependencyB</span><span class="p">));</span>
        <span class="p">}</span>
        <span class="k">if</span><span class="p">(</span><span class="n">dependencyC</span> <span class="k">is</span> <span class="k">null</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">dependencyC</span><span class="p">));</span>
        <span class="p">}</span>
    <span class="p">}</span>
    <span class="err">…</span>
<span class="p">}</span>
</code></pre></div></div> <p>Perhaps you even think this is a best practice. Is it? As with many things, the answer is really: It depends. One of the things that has greatly shaped my views on several aspects of software development over the years is adopting Test-Driven Development. The “test” part of the name is really a hold-over from adapting the practice of writing Unit Tests for driving design. With Unit Testing, you’re <em>testing</em> the code you’ve written. With Test-Driven Development, you’re <em>constraining the design</em> of the code to meet a set of specifications. It’s really quite a difference and one you may not fully appreciate unless you fully buy in to doing it for an extended period of time.</p> <p>One of the side-effects of practicing TDD is that you don’t write code unless it’s needed to satisfy a failing test. The use of code coverage tools is basically superfluous for TDD practitioners, not to mention that the practice renders far superior regression test suites. What, however, does this have to do with validation?</p> <p>When driving out implementation through a series of executable specifications (i.e. an objective list of exactly how the software should work), we may end up writing code which technically <em>could</em> be called a certain way which would result in exceptions or logical errors, but in <em>practice</em> never is. As it relates to this topic, all code we write can be grouped into two categories: public and private. In this sense I’m not talking about the access modifiers we place upon the code artifacts themselves, but the intended use of the code. Is the code you’re writing going to be used by others, or is it just code we’re calling internally within our applications? If it’s code you’re driving out through TDD which others will be calling, then you should have specifications which describe how the code will react when used correctly as well as incorrectly and thus will have the appropriate amount of validation. 
If it isn’t code anyone else will be, or currently is calling (see also YAGNI), then the components which <em>do</em> call it will have been designed such that they don’t call the component incorrectly, rendering such validation useless.</p> <p>Let’s consider our code again:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">public</span> <span class="k">class</span> <span class="nc">MyService</span>
<span class="p">{</span>
    <span class="k">public</span> <span class="k">void</span> <span class="nf">DoSomething</span><span class="p">(</span><span class="n">IDependencyA</span> <span class="n">dependencyA</span><span class="p">,</span> <span class="n">IDependencyB</span> <span class="n">dependencyB</span><span class="p">,</span> <span class="n">IDependencyC</span> <span class="n">dependencyC</span><span class="p">)</span>
    <span class="p">{</span>
        <span class="k">if</span><span class="p">(</span><span class="n">dependencyA</span> <span class="k">is</span> <span class="k">null</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">dependencyA</span><span class="p">));</span>
        <span class="p">}</span>
        <span class="k">if</span><span class="p">(</span><span class="n">dependencyB</span> <span class="k">is</span> <span class="k">null</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">dependencyB</span><span class="p">));</span>
        <span class="p">}</span>
        <span class="k">if</span><span class="p">(</span><span class="n">dependencyC</span> <span class="k">is</span> <span class="k">null</span><span class="p">)</span>
        <span class="p">{</span>
            <span class="k">throw</span> <span class="k">new</span> <span class="nf">ArgumentNullException</span><span class="p">(</span><span class="k">nameof</span><span class="p">(</span><span class="n">dependencyC</span><span class="p">));</span>
        <span class="p">}</span>
    <span class="p">}</span>
    <span class="err">…</span>
<span class="p">}</span>
</code></pre></div></div> <p>If this is an internal service that isn’t going to be called by any other code except other components within your application, we have 14 lines of code that are unneeded and are just adding noise to our code. I’ve worked in shops where every class in an application or library was coded this way, effectively adding hundreds to thousands of lines of unneeded code. Like regions, comments, or poorly factored code, this adds to the cognitive load required for reading through and understanding the code and ultimately is unnecessary. So the next time you reflexively start adding such validation, consider the possibility that perhaps you may be adding too much validation.</p> For Whom is this Container? http://aspiringcraftsman.com/2022/06/01/for-whom-is-this-container.html Aspiring Craftsman urn:uuid:a002a115-ef0f-7cf6-dc66-bfb35eedf380 Wed, 01 Jun 2022 08:00:00 +0000 Several of the Messaging platforms in the .Net space have pretty rudimentary APIs (e.g. RabbitMq, Kafka) which require quite a bit of boiler-plate code to be written to get a simple message published and subscribed. 
You could turn to one of the Conforming Abstraction libraries such as NServiceBus or MassTransit, but perhaps you don’t really want a lowest-common denominator API, you don’t like something about how it creates the messages or topic/queue artifacts, or you simply want a fluent API expressed in terms of the native platform’s nomenclature and behavior. This might lead you down the road of creating your own KafkaBus, SQSBus, RabbitMqBus, etc. that feels like the API you wish the original development team had just provided for you to begin with. Ah, but now you have a dilemma: Frameworks such as this tend to require a number of components you’ll need to compose, many of which you may want to allow users to configure (e.g. serialization needs, consumer class conventions, logging, produce and consume pipelines, etc.). You could write hand-rolled factories, builders, singletons, etc. to facilitate the configuration and building of instances of your components, but you know that using a dependency injection container would make both development and long-term maintenance of your library much easier. But now you have another dilemma: Are you going to tie your project to some open-source container? If so, which one? Should you support a handful of the most popular ones? Should you just rely upon the Service Locator pattern and provide configuration for end users should they want to resolve from their own containers? <p>Several of the Messaging platforms in the .Net space have pretty rudimentary APIs (e.g. RabbitMq, Kafka) which require quite a bit of boiler-plate code to be written to get a simple message published and subscribed. You could turn to one of the Conforming Abstraction libraries such as NServiceBus or MassTransit, but perhaps you don’t really want a lowest-common denominator API, you don’t like something about how it creates the messages or topic/queue artifacts, or you simply want a fluent API expressed in terms of the native platform’s nomenclature and behavior. This might lead you down the road of creating your own KafkaBus, SQSBus, RabbitMqBus, etc. that feels like the API you <em>wish</em> the original development team had just provided for you to begin with. Ah, but now you have a dilemma: Frameworks such as this tend to require a number of components you’ll need to compose, many of which you may want to allow users to configure (e.g. serialization needs, consumer class conventions, logging, produce and consume pipelines, etc.). You could write hand-rolled factories, builders, singletons, etc. to facilitate the configuration and building of instances of your components, but you know that using a dependency injection container would make both development and long-term maintenance of your library much easier. But now you have another dilemma: Are you going to tie your project to some open-source container? If so, which one? Should you support a handful of the most popular ones? Should you just rely upon the Service Locator pattern and provide configuration for end users should they want to resolve from their own containers?</p> <p>This was essentially the dilemma the ASP.Net Core team found themselves in when they set out to develop .Net Core. They had a fairly sizable framework with a lot of moving parts, many of which they wanted to allow the end user to configure. Earlier versions of ASP.Net MVC were built using a Service Locator pattern implementation which facilitated the ability to configure resolving from an open-source container of your choice. 
This, however, would have no doubt presented various design limitations, in addition to a lack of elegance in the resulting codebase, so the team decided to build the new platform from the ground up using dependency injection. They couldn’t, however, feasibly decide to couple their framework to one of the already mature and successful open source DI containers for various reasons. This prompted them to write their own.</p> <p>One of the keys to understanding the capabilities offered by .Net Core’s container compared to other libraries is recognizing that they built it for their needs, not yours. There is no doubt that there was recognition of the usefulness for some developers to have an out-of-the-box DI container, but they didn’t set out to build a container to compete with the already extremely mature frameworks such as Autofac, StructureMap, or Ninject. For instance, because they weren’t developing user interactive client-facing applications, they didn’t have needs such as convention-based scanning registration, multi-tenancy support, the need for decorators, etc. Their needs were pretty much limited to known types with lifetime scopes of transient, singleton, or scoped per request.</p> <p>Oddly, there is now a whole new generation of .Net developers who have never used a DI container other than that provided by the Microsoft Extensions suite, and who are missing out on being exposed to solutions to problems which containers like Autofac, Lamar, and others facilitate fairly easily, largely I believe because no one has ever really told them: Microsoft didn’t <em>really</em> write that for you.</p> Pragmatic Deferral https://lostechies.com/derekgreer/2022/05/31/pragmatic-deferral/ Los Techies urn:uuid:b8e5fba6-f2d8-e502-10ab-7971b60ca56e Tue, 31 May 2022 13:00:00 +0000 Software engineering is often about selecting the right trade-offs. While deferring feature development is often somewhat straightforward, based upon a speculation about the return on investment, and generally decided by the customer; marketing; sales; or product people; low-level implementation decisions are typically made by the development team or individual developers and can often prove to be a bit more contentious among teams with a plurality of strong opinions. This is where principles like YAGNI (You Aren’t Going to Need It), or the Rule of Three have often been set forth as a guiding heuristic. <p>Software engineering is often about selecting the right trade-offs. While deferring feature development is often somewhat straightforward, based upon a speculation about the return on investment, and generally decided by the customer; marketing; sales; or product people; low-level implementation decisions are typically made by the development team or individual developers and can often prove to be a bit more contentious among teams with a plurality of strong opinions. This is where principles like YAGNI (You Aren’t Going to Need It), or the Rule of Three have often been set forth as a guiding heuristic.</p> <p>While I generally advise the teams I coach to allow the executable specifications (i.e. the tests) to drive emergent design and to defer the introduction of ancillary libraries, frameworks, patterns, and custom infrastructure until needed, there is a level of pragmatism that I employ when determining when to introduce such things.</p> <p>I’ve been a fan of Test-Driven Development for some time now and have practiced it for over a decade. 
One of the primary benefits of Test-Driven Development is having an objective measure guiding what needs to get built. For example, if the acceptance criteria for a User Story concerns building a new Web API for a company’s custom B2B solution, your specs are going to drive out some sort of HTTP-based API. What the specs won’t dictate, however, are decisions such as whether to use an MVC framework or an IOC container, or whether to introduce a fluent validation library or an object mapping library. Should we adhere strictly to principles like <a href="https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it">YAGNI</a> or the <a href="https://en.wikipedia.org/wiki/Rule_of_three_(computer_programming)">Rule of Three</a> for guidance here? My answer is: it depends.</p> <p>Deferring software decisions comes with quite a range of consequences. Some decisions, such as whether to select ASP.NET MVC at the outset of a .Net-based Web application, could cause quite a bit of rework if you were to defer such a decision until working with lower-level components started to reveal friction or duplication. Other decisions, such as deferring the introduction of an object mapping library (e.g. Automapper) until the shape of the objects you’re returning actually differs from your entities, essentially have only positive consequences. But how do we know?</p> <p>The YAGNI principle is very similar to the firearm safety rule “The Gun is Always Loaded”. No, the gun isn’t always loaded … but it’s best to treat it like it is. Similarly, “You aren’t going to need it” doesn’t really mean you won’t need it; it’s intended to help you avoid unnecessary work. That is, until it causes more work.</p> <p>In software engineering, the more you code, the more you’ll have to maintain. The Art of Not Doing Stuff, when correctly applied, can save companies as much as or more money than building the right things. While I’m not religious these days, there’s a definition of the term “Hermeneutics” that I heard years ago from a Christian radio personality, Hank Hanegraaff. He would say: “Hermeneutics is the art and science of biblical interpretation”. He would go on to explain that it’s a science because it’s guided by a system of rules, but it’s an art in that you get better at it the more you do it. Having heard that explanation years ago, I have long felt these properties are equally descriptive of software development.</p> <p>For myself, I take a pragmatic approach to YAGNI in that I make selections for a number of things at the outset of a new project which I’ve recognized, through experience, have resulted in less friction down the road; and I defer choices which I reason to have little to no cost, implementing them at the point a given User Story’s acceptance criteria drives the need. For example, I do start off setting up a Web project using ASP.NET MVC. I do set up end-to-end testing infrastructure. I do add an open source DI container and set up convention-based registration, as sketched below. These are things which I’ve found actually cause me more friction if I pretend I’m not going to need them. I don’t want to implement my own IHttpHandler and wait until I see the need for a robust routing and pipeline framework and have to go back and reimplement everything. I don’t want to be hand-rolling factories over and over and have to go back and modify code at the point enough duplication reveals the need for dependency injection, and I don’t want to edit a Startup.cs or other bootstrapper component each time a component has a new dependency.</p>
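<p>As an illustration only, a minimal sketch of this kind of convention-based registration against the stock Microsoft container might look like the following; the extension method name and the I-prefixed interface convention are hypothetical, not any particular library’s API:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using System.Linq;
using System.Reflection;
using Microsoft.Extensions.DependencyInjection;

public static class ConventionRegistrationExtensions
{
    // Hypothetical convention: register each concrete class (e.g. OrderService)
    // against an interface matching its name (IOrderService), when one exists.
    public static IServiceCollection RegisterByConvention(
        this IServiceCollection services, Assembly assembly)
    {
        var candidates = assembly.GetTypes()
            .Where(t => t.IsClass)
            .Where(t => !t.IsAbstract);

        foreach (var implementation in candidates)
        {
            var conventionInterface = implementation.GetInterfaces()
                .FirstOrDefault(i => i.Name == "I" + implementation.Name);

            if (conventionInterface != null)
                services.AddTransient(conventionInterface, implementation);
        }

        return services;
    }
}
</code></pre></div></div> <p>With a convention like this scanning an assembly at startup, registering a new component requires no edit to Startup.cs or any other bootstrapper, which is precisely the friction being avoided.</p>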
<p>Outside of these few concerns, however, I do typically defer things until needed.</p> Magical Joy https://lostechies.com/derekgreer/2022/05/27/magical-joy/ Los Techies urn:uuid:51894de0-4304-2d28-6aa8-57666890c876 Fri, 27 May 2022 13:00:00 +0000 In a segment of an interview with host Byron Sommardahl on The Driven Developer Podcast, recorded in the summer of 2021, Byron and I discussed a bit about a pattern I introduced to our project when we worked together in 2010 which Byron later dubbed “The Magical Joy Bus” <p>In a segment of an interview with host Byron Sommardahl on <a href="https://podcasts.apple.com/us/podcast/all-things-senior-derek-greer/id1584867029?i=1000541910261">The Driven Developer Podcast</a>, recorded in the summer of 2021, Byron and I discussed a bit about a pattern I introduced to our project when we worked together in 2010 which Byron later dubbed “The Magical Joy Bus” Nine Years Remote http://aspiringcraftsman.com/2022/05/26/nine-years-remote.html Aspiring Craftsman urn:uuid:3032cd6e-d96e-34a3-3acc-4ff8654e9927 Thu, 26 May 2022 13:00:00 +0000 A recent inquiry from a recruiter about accepting a partially-remote position prompted me to reflect upon 9 years of working remotely as a software developer. <p>A recent inquiry from a recruiter about accepting a partially-remote position prompted me to reflect upon 9 years of working remotely as a software developer.</p> <p>When I first started working from home, attitudes were quite different than they are in today’s post-COVID-19 world. Full-time remote software development jobs were few and far between, and most employers that allowed working remotely full time did so due to factors other than a belief that it was more productive and cost-effective. Studies since have overwhelmingly shown that the majority were simply wrong.</p> <p>One interesting side-effect of the previous year’s COVID-19 political entanglement is the degree to which it forced an entire generation of closed-minded, micro-managing executives to consider (through necessity) that remote workforces, especially for primarily thoughtwork-based positions, were not only viable, but perhaps even superior.</p> <p>When our entire society started shutting down due to concerns over the COVID-19 virus, I actually hardly noticed at first. Having transitioned to full-time remote work in early 2014, I had long since become accustomed to working remotely by the time society started shutting down. Prior to landing my first full-time remote position, I had worked at a couple of prior companies which allowed working remotely a couple of days a week, so I had some notion of its viability even before then.</p> <p>While I was already used to working remotely, the whole pandemic thing actually helped to improve the lives of remote developers by remedying many of the productivity nuisances that plagued fully-remote as well as mixed teams. To a large extent, the primary issues that remote workers had to face prior to everything being shut down were the lack of remote workforce accommodations, namely: mature or provided collaboration tools (e.g. Slack, Zoom, Miro, etc.) and equal participation of remote workers on mixed teams.
While David Fullerton, in a <a href="https://stackoverflow.blog/2013/02/01/why-we-still-believe-in-working-remotely/">StackOverflow blog article</a> written back in 2013, had proffered the wisdom that “<em>If even one person on the team is remote, every single person has to start communicating online</em>”, joining a mixed team still often meant the remote worker was marginalized in meetings: they were the only one on a call while all their co-workers debated approaches around a conference table, they were forced to watch some whiteboarding design session over a video camera while trying to make out what everyone was saying, or they were simply left out of key social interactions, professionally disadvantaged in key business decisions due to the formation of cliques or simply from not being present during unplanned discussions. Conscientious employees working from home already knew they were far more efficient at home than in the office, as well as knowing that non-conscientious workers were just as likely, or more so, to screw off at work as they were at home, but it took everyone being forced to do it for an extended period of time to hammer that into the heads of many executives who felt uncomfortable with conducting business differently than they had in the 20th century.</p> <p>One absolutely huge thing that goes seemingly undiscussed is the financial impact of working remotely vs. commuting. While I commuted to the office for 20 years before transitioning to full-time remote, it wasn’t until I had become accustomed to working from home and was confronted with the idea of returning back to the world of the commuting zombies that my perspective changed with respect to that commute time. Prior to accepting a full-time remote job in early 2014, my commute time was approximately 1 hour one way, and that was on a good day when there wasn’t some minor traffic incident, which could easily add an extra 20-30 minutes to my time (and fairly regularly did). Once I had become accustomed to working remotely, the idea of tacking on an extra 5-10 hours a week in commute time to switch back to a job requiring you to work in the office seemed more like giving my time away for free. Prior to that, all those hours in the vehicle dealing with idiots on the road were just an assumed necessity. Driving to work was like driving anywhere else. Of course in the 20th century you had to drive to buy a new pair of shoes. Of course you had to drive to see a newly released movie. Of course you had to drive to go get a cheeseburger meal at McDonald’s. And of course, you had to drive to get to work. You didn’t think twice about it. You didn’t view commuting to work as 5-10 hours of your personal time given over to your employer for free for the privilege of employment any more than you’d have thought that McDonald’s owed you money for driving to their store to eat. Sure, you could listen to music, or talk radio, or a podcast, or an audio book. It wasn’t, however, really what you would have chosen to be doing at 6:30 in the morning. It wasn’t <em>your</em> time.</p> <p>Prior to COVID, trying to explain this perspective to those still in the office world was very much like Morpheus trying to explain to Neo that he’s in the Matrix.
Sure, recruiters or employers could understand the logic of an argument that commuting is time given to an employer essentially for free, but many would just think it ridiculous for you to go so far as to demand a higher salary for accepting a position requiring a commute (when you knew it wasn’t really required to do the job). This doesn’t even account for wear and tear on vehicles, gas expenses, or the little micro-batches of time you end up spending doing things like food prep, additional “get ready” time, more laundry, etc. that you wouldn’t otherwise do if you were staying home for the day. Moreover, even when you are compensated for your time, there’s a threshold beyond which your standard hourly rate isn’t worth the time. Okay, you may be willing to commute if your employer is going to compensate you for the extra 5-10 hours on top of the 40 you’re going to spend sitting in their cube farm under their fluorescent lighting (“Not near a window, Jim, because those seats are reserved for managers!”). Are you, however, willing to exchange that extra 5-10 hours a week for money to sit in the office for 45 hours? How about 50 hours? 60? At some point, it isn’t about whether you’re compensated or not. Hell, 40 hours a week really is too damn many hours to begin with. Add to that the insane perspective on time off that Americans get on average compared to much of the rest of the developed world. Hell, even plumbers and HVAC workers get paid for their commute time, and their job isn’t something that can be done remotely.</p> <p>Imagine if everyone actually accounted for these additional expenses when factoring in the pay they are willing to accept. If so, this would likely account for an extra 25-30% pay increase, accounting for time and travel expenses. For businesses on the fence about whether remote is better than on-site for their bottom line, this would certainly tip the scales. Currently, however, they aren’t forced to think this way. Or at least, many are still operating in a mindset that they don’t have to think this way. When it really comes down to it, a culture of requiring anyone who can do their job remotely to work in the office is really stealing from your employees. Fortunately, COVID has corrected this situation: enough eyes have been opened to the benefits of remote work, and enough businesses have seen the waste that goes into buying or renting commercial real estate, that even as many businesses attempt to force employees back into the office, there are now enough employers offering remote opportunities to give people a real choice.</p> User Stories https://lostechies.com/derekgreer/2022/05/25/user-stories/ Los Techies urn:uuid:62f54d95-0edb-9c5a-43f7-2b8c71ea1451 Wed, 25 May 2022 13:00:00 +0000 The use of User Stories has become fairly commonplace in the software industry. First introduced as an agile requirements-gathering process by Extreme Programming, User Stories arguably owe their popularity most to the adoption of the Scrum framework for which User Stories have become the de facto expression of its prescribed backlog. <p>The use of User Stories has become fairly commonplace in the software industry. First introduced as an agile requirements-gathering process by Extreme Programming, User Stories arguably owe their popularity most to the adoption of the Scrum framework for which User Stories have become the de facto expression of its prescribed backlog.</p> <p>So what exactly is a User Story?
Put simply, they are a lightweight approach to expressing the desired needs of a software system. The idea behind User Stories, which were introduced as simply “Stories” in the book <em>Extreme Programming Explained - Embrace Change</em> by Kent Beck, was to move away from rigid requirements-gathering in process, form, and nomenclature. Beck explained that the very word “requirement” was an inhibitor to embracing change because of its connotations of absolutism and permanence. At their inception, the intended form of stories was to create an index card containing a short title, a simple description written in prose, and an estimate.</p> <h2 id="the-three-part-template">The Three-Part Template</h2> <p>In the late 1990s, a software company named Connextra was an early adopter of Extreme Programming. In contrast to the distinct roles defined by the Scrum framework, XP doesn’t prescribe any specific roles, but is intended to adapt to existing roles within an organization (e.g. project managers, product managers, executives, technical writers, developers, testers, designers, architects, etc.).</p> <p>Most of Connextra’s stories originated with members of their Marketing and Sales departments, who wrote down a simple description of features they desired. This posed a problem for the development team, however, for when the time came to have a conversation about the feature, the development team often had difficulty locating the original stakeholder to begin the conversation. This led the team to formulate a 3-part template to help address friction resulting from ambiguous requirement sources. Their 3-part template is as follows:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> As a [type of user] I want to [do something] So that I can [get some benefit] </code></pre></div></div> <p>Ironically, while the 3-part template has become the de facto standard for authoring User Story descriptions, Scrum’s “Product Owner” role, most often filled by product development specialists acting as customer proxies, along with the use of agile planning tools such as Confluence, Planview, Azure DevOps Boards, etc., which capture who created a given story, tends to greatly diminish the need from which the template originated. Many teams, in cargo-cult fashion, continue to utilize the 3-part template where the original need (identifying the author of the story in order to start the conversation) no longer exists. Change has occurred, but because many didn’t understand the underlying impetus for the 3-part template, they were incapable of <em>adapting</em> to that change.</p> <p>Jeff Patton writes the following concerning the prevalent use of the 3-part story template in his book “User Story Mapping”:</p> <blockquote> <p>“… the template has become so ubiquitous, and so commonly taught, that there are those who believe that it’s not a story if it’s not written in that form. … All of this makes me sad. Because the real value of stories isn’t what’s written down on the card.
It comes from what we learn when we tell the story.”</p> </blockquote> <p>Mike Cohn, author of many books on agile processes including “User Stories Applied” and “Agile Estimating and Planning”, writes similarly:</p> <blockquote> <p>“Too often team members fall into a habit of beginning each user story with “As a user…” Sometimes this is the result of lazy thinking and the story writers need to better understand the product’s users before writing so many “as a user…” stories.”</p> </blockquote> <p>Cohn’s observations are spot on. In my experience, not only does this happen “too often”, it’s the rule, not the exception. It’s really just human nature. The moment a process becomes formulaic, teams will begin to just go through the motions without engaging their minds. This can be good for manual tasks like brick-laying or cleaning a house, but it is detrimental to processes intended to promote communication. Sadly, many teams spend an inordinate amount of time on the trappings of things like ensuring their requirements follow the 3-part story template rather than using the story as a tool for its original intent: a placeholder for a conversation.</p> <h2 id="there-and-back-again">There and Back Again</h2> <p>While not explicitly stated, the original idea behind Stories in Extreme Programming was to facilitate a conversation, not to define an objective goal. The agile movement started as a way to address issues in the industry’s largely failing attempts to apply manufacturing processes to software development. In particular, Stories were intended to address the underlying motivation for requirements (i.e. how teams determine what to build), not to themselves be requirements.</p> <p>In many ways, today’s User Stories have become the antithesis of what Kent Beck originally intended. Sadly, much of what is marketed as “agile” today has been corrupted by traditional-minded business analysts, product managers, and marketing agencies who never really understood the agile movement fully. User Stories have, to a large extent, become a casualty of these groups. We’ve gone from requirements to stories and back again. As described by Jeff Patton, <em>“Stories aren’t a way to write better requirements, but a way to organize and have better conversations.</em>”</p> <h2 id="the-better-way">The Better Way</h2> <p>Ultimately, the question companies seek to answer is: How do we determine the features which provide the best ROI for the business? While it may seem counterintuitive to some, customers aren’t generally the best source for determining what features to build. They can be <em>a</em> source, but they aren’t generally a team’s best source. Customers are, however, the best source for determining how customers currently work, what problems they face, and what friction is involved in any current processes. Various analysis techniques can be used to solicit customer opinions on desired features, but it’s best to rely upon such techniques merely as a means to distill the problems currently faced by customers. From there, stories are best created with a simple title and a description of the customer’s problem written in prose with the intent for the description to serve as a starting point for a conversation with the team.</p> <p>The best way to determine what to build is as a member of a mature agile team. The operative word here is <em>mature</em>.
What makes for a mature team is a Product Owner with a background in the problem domain space, a Team Coach with deep knowledge of agile and lean processes, and 3-5 cross-functional developers weighted toward senior experience who have gone through the forming, storming, norming, and performing phases.</p> <p>User Stories shouldn’t be feature requests, but rather placeholders for a conversation. A conversation with whom? With your team. About what? About how to iteratively solve the problems you learned from customers in small steps with frequent feedback. Product Owners should not bring requirements to a development team. There’s great power in collaboration. A smart team of 5 to 7 individuals, including a subject matter expert (what the Product Owner should bring to the table) and a coach, is a far better source for what features to build than just the customer or the Product Owner.</p> <h2 id="an-example">An Example</h2> <p>The following is an example story which more closely follows the original intent of Stories.</p> <p>Our scenario involves a company which provides a website allowing customers to create wedding and gift registries to send to others. In its current form, the site allows customers to pick from among existing vendors, but the company frequently receives requests from customers about specific products they’d like to see included. The current process involves the Sales team creating tickets for their Operations team to add new vendors to the site, which requires updating the production database directly. Additionally, the work currently falls to one person whose job entails other operational tasks, which often delays the timely fulfillment of customer requests.</p> <p>The following represents the story:</p> <table style="border: 1px solid black; background-color: white; color: black"> <tr> <td> <h2>Easily Manage Registry Products</h2> <hr style="border-top: 1px solid black" /> <h3>Description</h3> Our customers often want to add products that aren't part of our current vendor product list. This causes the sales team to constantly have to put in tickets and currently Margret is the only one that is working the tickets. We need a better solution! </td> </tr> </table> <p>Note how the description is written in prose (i.e. in normal conversational language), and doesn’t follow the wooden 3-part template. Note also, the story doesn’t prescribe <em>how</em> to solve the problem. It just provides background on what the problem is and who it affects. It isn’t <em>just</em> that the story doesn’t dictate implementation details, but that it doesn’t dictate the solution <em>at all</em>. This is the ideal starting point for most stories. It’s a placeholder for a conversation about how to solve the problem.</p> <p>From here, the team would collaborate on the story to determine the best solution that results in the smallest feature increment which adds value to the end user. Several ideas may be discussed. The system could integrate with a 3rd-party content management system, allowing people within the company without SQL experience to update content.
Alternately, the team may decide that adding a feature to allow customers to add custom products directly to their personal event registry is both easier and scales far better than solutions requiring company employees to work tickets.</p> <p>As part of a story refinement session, the team may update the story with acceptance criteria to guide the implementation:</p> <table style="border: 1px solid black; background-color: white; color: black"> <tr> <td> <h2>Easily Manage Registry Products</h2> <hr style="border-top: 1px solid black" /> <h3>Description</h3> Our customers often want to add products that aren't part of our current vendor product list. This causes the sales team to constantly have to put in tickets and currently Margret is the only one that is working the tickets. We need a better solution! <br /><br /> <h3>Acceptance Criteria</h3> <b>When the customer navigates to the edit registry view</b><br /> &nbsp;&nbsp;it should contain a link for adding custom products <br /><br /> <b>When the customer clicks the add custom product link</b><br /> &nbsp;&nbsp;it should navigate to the add custom product view (note: see balsamiq wireframe attached) <br /><br /> <b>When the customer adds a new custom product with valid inputs</b><br /> &nbsp;&nbsp;it should add the custom product to the customer's registry<br /> &nbsp;&nbsp;it should display a success message in the application banner<br /> &nbsp;&nbsp;it should navigate back to the edit registry page <br /><br /> <b>When the customer enters invalid custom product parameters</b><br /> &nbsp;&nbsp;it should show standard field level error messages<br /> &nbsp;&nbsp;it should not enable the save button<br /> </td> </tr> </table> <p>While an Acceptance Criteria section isn’t mandatory, it can often be valuable for helping to frame the scope of the story, for reminding the team of the high-level plans discussed for deferred work, and/or for serving as the team’s Definition of Done. For small teams involving just a few members, or for highly adaptive and collaborative teams, it may be enough to just write “<em>We decided to add a feature to allow the customer to add their own products!</em>”. The team may very well take the initial story description and rapidly iterate on a solution, deciding together when they think it’s done! (Gasp!) Of course, this level of informality is probably only suited to highly cohesive, highly functioning teams. For inexperienced to moderately experienced teams, some denotation of Acceptance Criteria would be advisable. The key point is, the story didn’t come to the team in the form of requirements, but as a placeholder for a conversation.</p> <h2 id="conclusion">Conclusion</h2> <p>As the adoption of agile frameworks such as Scrum has become more mainstream, a number of practices have become formulaic, adopted by teams via a cargo-cult onboarding to agile practices without truly grasping what it means to be agile. The User Story has all but lost its original intent among many teams who have done little more than slap agile labels onto Waterfall manufacturing processes. User Stories were never intended to be requirements, but rather a placeholder for a conversation with the development team.
Let’s do better.</p> .Net Project Builds with Node Package Manager https://lostechies.com/derekgreer/2020/12/10/dotnet-project-builds-with-npm/ Los Techies urn:uuid:658daf41-1dc4-075d-278e-3e9da2891e09 Thu, 10 Dec 2020 07:00:00 +0000 A few years ago, I wrote an article entitled Separation of Concerns: Application Builds &amp; Continuous Integration wherein I discussed the benefits of separating project builds from CI/CD concerns by creating a local build script which lives with your project. Not long after writing that article, I was turned on to what I’ve come to believe is one of the easiest tools I’ve encountered for managing .Net project builds thus far: npm. <p>A few years ago, I wrote an article entitled <a href="http://aspiringcraftsman.com/2016/02/28/separation-of-concerns-application-builds-continuous-integration/">Separation of Concerns: Application Builds &amp; Continuous Integration</a> wherein I discussed the benefits of separating project builds from CI/CD concerns by creating a local build script which lives with your project. Not long after writing that article, I was turned on to what I’ve come to believe is one of the easiest tools I’ve encountered for managing .Net project builds thus far: npm.</p> <p>Most development platforms provide a native task-based build technology. Microsoft’s tooling for these needs is MSBuild: a command-line tool whose build files double as Visual Studio’s project and solution definition files. I used MSBuild briefly for scripting custom build concerns for a couple of years, but found it to be awkward and cumbersome. Around 2007, I abandoned use of MSBuild for creating builds and began using Rake. While it had the downside of requiring a bit of knowledge of Ruby, it was a popular choice among those willing to look outside of the Microsoft camp for tooling and had community support for working with .Net builds through the <a href="https://www.codemag.com/article/1006101/Building-.NET-Systems-with-Ruby-Rake-and-Albacore">Albacore</a> library. I’ve used a few different technologies since, but about 5 years ago I saw a demonstration of the use of npm for building .Net projects at a conference and I was immediately sold. When used well, it really is the easiest and most terse way to script a custom build for the .Net platform I’ve encountered.</p> <p>“So what’s special about npm?” you might ask. The primary appeal of using npm for building applications is that it’s easy to use. Essentially, it’s just an orchestration of shell commands.</p> <h3 id="tasks">Tasks</h3> <p>With other build tools, you’re often required to know a specific language in addition to learning special constructs peculiar to the build tool to create build tasks. 
In contrast, npm’s expected package.json file simply defines an array of shell command scripts:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"clean"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Clean the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"restore"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Restore dependencies."</span><span class="p">,</span><span class="w"> </span><span class="nl">"compile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Compile the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"test"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Run the tests."</span><span class="p">,</span><span class="w"> </span><span class="nl">"dist"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Create a distribution."</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Some author"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>As with other build tools, NPM provides the ability to define dependencies between build tasks. This is done using pre- and post- lifecycle scripts. Simply, any task issued by NPM will first execute a script by the same name with a prefix of “pre” when present and will subsequently execute a script by the same name with a prefix of “post” when present. 
For example:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"clean"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Clean the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"prerestore"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run clean"</span><span class="p">,</span><span class="w"> </span><span class="nl">"restore"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Restore dependencies."</span><span class="p">,</span><span class="w"> </span><span class="nl">"precompile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run restore"</span><span class="p">,</span><span class="w"> </span><span class="nl">"compile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Compile the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"pretest"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run compile"</span><span class="p">,</span><span class="w"> </span><span class="nl">"test"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Run the tests."</span><span class="p">,</span><span class="w"> </span><span class="nl">"prebuild"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run test"</span><span class="p">,</span><span class="w"> </span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Publish a distribution."</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Some author"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>Based on the above package.json file, issuing “npm run build” will result in running the tasks of clean, restore, compile, test, and build in that order by virtue of each declaring an appropriate dependency.</p> <p>Given you’re okay with limiting a fully-specified dependency chain where a subset of the build can be initiated at any stage (e.g. 
running “npm run test” and triggering clean, restore, and compile first), the above orchestration can be simplified by installing the npm-run-all node dependency and defining a single pre- lifecycle script for the main build target:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"clean"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Clean the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"restore"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Restore dependencies."</span><span class="p">,</span><span class="w"> </span><span class="nl">"compile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Compile the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"test"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Run the tests."</span><span class="p">,</span><span class="w"> </span><span class="nl">"prebuild"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm-run-all clean restore compile test"</span><span class="p">,</span><span class="w"> </span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Publish a distribution."</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"John Doe"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="p">,</span><span class="w"> </span><span class="nl">"devDependencies"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"npm-run-all"</span><span class="p">:</span><span class="w"> </span><span class="s2">"^4.1.5"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>In this example, issuing “npm run build” will result in the prebuild script executing npm-run-all with the parameters clean, restore, compile, and test, which it will execute in the order listed.</p> <h3 id="variables">Variables</h3> <p>Aside from understanding how to utilize the pre- and post- lifecycle scripts to denote task dependencies, the only other thing you really need to know is how to work with variables.</p> <p>Node’s npm command facilitates the definition of variables by command-line parameters as well as declaring package variables. When npm executes, each of the properties declared within the package.json is flattened and prefixed with “npm_package_”.
For example, the standard “version” property can be used as part of a dotnet build to denote a project version by referencing ${npm_package_version}:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"configuration"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Release"</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="s2">"dotnet build ./src/*.sln /p:Version=${npm_package_version}"</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"John Doe"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="p">,</span><span class="w"> </span><span class="nl">"devDependencies"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"npm-run-all"</span><span class="p">:</span><span class="w"> </span><span class="s2">"^4.1.5"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>Command-line parameters can also be passed to npm and are similarly prefixed with “npm_config_” with any dashes (“-”) replaced with underscores (“_”). 
For example, the previous version setting could be passed to dotnet.exe in the following version of package.json by issuing the below command:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>npm run build --product-version=2.0.0 </code></pre></div></div> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"configuration"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Release"</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="s2">"dotnet build ./src/*.sln /p:Version=${npm_config_product_version}"</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"John Doe"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="p">,</span><span class="w"> </span><span class="nl">"devDependencies"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"npm-run-all"</span><span class="p">:</span><span class="w"> </span><span class="s2">"^4.1.5"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>(Note: the parameter --version is an npm parameter for printing the version of npm being executed and therefore can’t be used as a script parameter.)</p> <p>The only other important thing to understand about the use of variables with npm is that the method of dereferencing is dependent upon the shell used. When using npm on Windows, the default shell is cmd.exe.
If using the default shell on Windows, the version parameter would need to be dereferenced as %npm_config_product_version%:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"configuration"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Release"</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="s2">"dotnet build ./src/*.sln /p:Version=%npm_config_product_version%"</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"John Doe"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="p">,</span><span class="w"> </span><span class="nl">"devDependencies"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"npm-run-all"</span><span class="p">:</span><span class="w"> </span><span class="s2">"^4.1.5"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>Until recently, I used a node package named “cross-env” which allows you to normalize how you dereference variables regardless of platform, but for several reasons, including cross-env being placed in maintenance mode, the added dependency overhead, syntax noise, and the support shells like bash provide for advanced variable expansion cases such as default values, I’d recommend any cross-platform execution be supported by just standardizing on a single shell (e.g. “Bash”). With the introduction of Windows Subsystem for Linux and the virtual ubiquity of git for version control, most developer Windows systems already contain the bash shell. To configure npm to use bash at the project level, just create a file named .npmrc at the package root containing the following line:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>script-shell=bash </code></pre></div></div> <h3 id="using-node-packages">Using Node Packages</h3> <p>While not necessary, there are many CLI node packages that can be easily leveraged for aiding in authoring your builds. For example, a package named “rimraf”, which functions like Linux’s “rm -rf” command, is a utility you can use to implement a clean script for recursively deleting any temporary build folders created as part of previous builds. In the following package.json build, a package target builds a NuGet package which it outputs to a dist folder in the package root.
The rimraf command is used to delete this temp folder as part of the build script’s dependencies:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"clean"</span><span class="p">:</span><span class="w"> </span><span class="s2">"rimraf dist"</span><span class="p">,</span><span class="w"> </span><span class="nl">"prebuild"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run clean"</span><span class="p">,</span><span class="w"> </span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="s2">"dotnet pack ./src/ExampleLibrary/ExampleLibrary.csproj -o dist /p:Version=${npm_package_version}"</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"John Doe"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="p">,</span><span class="w"> </span><span class="nl">"devDependencies"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"npm-run-all"</span><span class="p">:</span><span class="w"> </span><span class="s2">"^4.1.5"</span><span class="p">,</span><span class="w"> </span><span class="nl">"rimraf"</span><span class="p">:</span><span class="w"> </span><span class="s2">"^3.0.2"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> Conventional Options https://lostechies.com/derekgreer/2020/11/20/conventional-options/ Los Techies urn:uuid:bddf9f6a-a398-da37-4b68-84d04a57d922 Fri, 20 Nov 2020 07:00:00 +0000 I’ve really enjoyed working with the Microsoft Configuration libraries introduced with .Net Core approximately 5 years ago. The older XML-based API was quite a pain to work with, so the ConfigurationBuilder and associated types filled a long-overdue need for the platform. <p>I’ve really enjoyed working with the Microsoft Configuration libraries introduced with .Net Core approximately 5 years ago.
The older XML-based API was quite a pain to work with, so the ConfigurationBuilder and associated types filled a long-overdue need for the platform.</p> <p>I had long since adopted a practice of creating discrete configuration classes populated and registered with a DI container over direct use of the ConfigurationManager class within components, so I was pleased to see the platform nudge developers in this direction through the introduction of the IOptions<T> type.</p> <p>There were a few aspects of the prescribed use of the IOptions<T> type of which I wasn't particularly fond: needing to inject IOptions<T> rather than the actual options type, taking a dependency upon the Microsoft.Extensions.Options package from my library packages, and the ceremony of binding the options to the IConfiguration instance. To address these concerns, I wrote some extension methods which took care of binding the type to my configuration by convention (i.e. binding a type with a suffix of Options to a section corresponding to the option type's prefix) and registering it with the container.</p> <p>I’ve recently released a new version of these extensions supporting several of the most popular containers as an open source library. You can find the project <a href="http://github.com/derekgreer/conventional-options">here</a>.</p> <p>The following are the steps for using these extensions:</p> <h3 id="step-1">Step 1</h3> <p>Install ConventionalOptions for the target DI container:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$&gt; nuget install ConventionalOptions.DependencyInjection </code></pre></div></div> <h3 id="step-2">Step 2</h3> <p>Add Microsoft’s Options feature and register option types:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="n">services</span><span class="p">.</span><span class="nf">AddOptions</span><span class="p">();</span> <span class="n">services</span><span class="p">.</span><span class="nf">RegisterOptionsFromAssemblies</span><span class="p">(</span><span class="n">Configuration</span><span class="p">,</span> <span class="n">Assembly</span><span class="p">.</span><span class="nf">GetExecutingAssembly</span><span class="p">());</span> </code></pre></div></div> <h3 id="step-3">Step 3</h3> <p>Create an Options class with the desired properties:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="k">public</span> <span class="k">class</span> <span class="nc">OrderServiceOptions</span> <span class="p">{</span> <span class="k">public</span> <span class="kt">string</span> <span class="n">StringProperty</span> <span class="p">{</span> <span class="k">get</span><span class="p">;</span> <span class="k">set</span><span class="p">;</span> <span class="p">}</span> <span class="k">public</span> <span class="kt">int</span> <span class="n">IntProperty</span> <span class="p">{</span> <span class="k">get</span><span class="p">;</span> <span class="k">set</span><span class="p">;</span> <span class="p">}</span> <span class="p">}</span> </code></pre></div></div> <h3 id="step-4">Step 4</h3> <p>Provide a corresponding configuration section matching the prefix of the Options class (e.g.
in appsettings.json):</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"OrderService"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"StringProperty"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Some value"</span><span class="p">,</span><span class="w"> </span><span class="nl">"IntProperty"</span><span class="p">:</span><span class="w"> </span><span class="mi">42</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <h3 id="step-5">Step 5</h3> <p>Inject the options into types resolved from the container:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="k">public</span> <span class="k">class</span> <span class="nc">OrderService</span> <span class="p">{</span> <span class="k">public</span> <span class="nf">OrderService</span><span class="p">(</span><span class="n">OrderServiceOptions</span> <span class="n">options</span><span class="p">)</span> <span class="p">{</span> <span class="c1">// ... use options</span> <span class="p">}</span> <span class="p">}</span> </code></pre></div></div> <p>Currently ConventionalOptions works with Microsoft’s DI Container, Autofac, Lamar, Ninject, and StructureMap.</p> <p>Enjoy!</p> Picking a Web Microframework https://lostechies.com/ryansvihla/2020/05/27/picking-a-microframework/ Los Techies urn:uuid:b75786e5-449a-a555-4277-5555fd14eb08 Wed, 27 May 2020 00:23:00 +0000 I’ve had to use this at work the last couple of weeks. We had a “home grown” framework for a new application we’re working and the first thing I did was try and rip that out (new project so didn’t have URL and parameter sanitization anyway to do routes, etc). <p>I’ve had to use this at work the last couple of weeks. 
We had a “home grown” framework for a new application we’re working on, and the first thing I did was try to rip that out (it was a new project, so there wasn’t yet URL and parameter sanitization, routing, etc. to redo).</p> <p>However, because the group I was working with is pretty “anti framework”, I had to settle on something that was lightweight, integrated with Jetty, and allowed us to work the way that was comfortable for us as a team (also, it had to work with Scala).</p> <h2 id="microframeworks">Microframeworks</h2> <p>The team had shown a lot of disdain for Play (which I had actually used quite a lot when I last led a JVM-based tech stack) and Spring Boot as being too heavyweight, so these were definitely out.</p> <p>Fortunately, in the JVM world there is a big push-back now against heavy web frameworks, which meant I had lots of choices for “non frameworks” that could still do some basic security, routing, and authentication without hurting the existing team’s productivity.</p> <p>There are probably 3 dozen microframeworks to choose from with varying degrees of value, but the three that seemed easiest to start with today were:</p> <ul> <li><a href="https://scalatra.org">Scalatra</a></li> <li><a href="https://javalin.io">Javalin</a></li> <li><a href="https://quarkus.io">Quarkus</a></li> </ul> <h3 id="my-attempt-with-quarkus">My Attempt with Quarkus</h3> <p><a href="https://quarkus.io/">Quarkus</a> has a really great getting started story, but it’s harder to get started with it on an existing project; it wasn’t super trivial to add, and after a couple of days of figuring out the magic incantation I just decided to punt on it. I think because of its popularity in the Cloud Native space (which we’re trying to target), the backing of <a href="https://developers.redhat.com/blog/2019/03/07/quarkus-next-generation-kubernetes-native-java-framework/">Red Hat</a>, and the pluggable nature of the stack, there are a lot of reasons to want this to work. In the end, because of the timeline, it didn’t make the cut. But it may come back.</p> <h3 id="my-attempt-with-javalin">My Attempt with Javalin</h3> <p>Javalin, despite being a less popular project than Quarkus, is getting some buzz. It also looks like it just slides into the team’s existing Servlet code base. I wanted this to work very badly but stopped before I even started because of <a href="https://github.com/tipsy/javalin/issues/931">this issue</a>, so this was out despite being, on paper, a really excellent framework.</p> <h3 id="my-attempt-with-scalatra">My Attempt with Scalatra</h3> <p><a href="https://scalatra.org/">Scalatra</a> has been around for a number of years and is inspired by <a href="http://sinatrarb.com/">Sinatra</a> which I used quite a bit in my Ruby years.
This took a few minutes to get going just following their <a href="https://scalatra.org/guides/2.7/deployment/standalone.html">standalone directions</a>, and then some more to successfully convert the routes and work through the learning curve.</p> <p>Some notes:</p> <ul> <li>The routing API and parameters etc. are very nice to work with, IMO.</li> <li>It was <a href="https://scalatra.org/guides/2.7/formats/json.html">very easy</a> to get JSON-by-default support set up.</li> <li>Metrics were <a href="https://scalatra.org/guides/2.7/monitoring/metrics.html">very easy</a> to wire up.</li> <li>Swagger integration was pretty rough; while it looks good on paper, I could not get an example to show up, and it is unable to <a href="https://github.com/scalatra/scalatra/issues/343">handle case classes or enums</a>, which we use.</li> <li>Benchmark performance, when I’ve <a href="https://johnykov.github.io/bootzooka-akka-http-vs-scalatra.html">looked</a> around the web, was pretty bad; I’ve not done enough to figure out if this is real or not. I’ve seen firsthand that a lot of benchmarks are just wrong.</li> <li>Integration with JUnit has been rough and I cannot seem to get the correct port to fire; I suspect I have to stop using the @Test annotation is all (which I’m not enjoying).</li> <li>Http/2 support is still lacking despite being available in the version of Jetty they’re on. I’ve read in a few places that the holdup is keeping <a href="https://github.com/eclipse/jetty.project/issues/1364">web sockets working</a>, but either way there is <a href="https://github.com/scalatra/scalatra/issues/757">no official support in the project yet</a>.</li> </ul> <h2 id="conclusion">Conclusion</h2> <p>I think we’re going to stick with Scalatra for the time being, as it is a mature framework that works well for our current goals. However, the lack of http/2 support may be a deal breaker in the medium term.</p> Getting started with Cassandra: Data modeling in the brief https://lostechies.com/ryansvihla/2020/02/05/getting-started-cassandra-part-3/ Los Techies urn:uuid:faeba5a6-db95-bc14-4f6f-333e146885f1 Wed, 05 Feb 2020 20:23:00 +0000 Cassandra data modeling isn’t really something you can do “in the brief” and is itself a subject that can take years to fully grasp, but this should be a good starting point.
<p>Cassandra data modeling isn’t really something you can do “in the brief” and is itself a subject that can take years to fully grasp, but this should be a good starting point.</p> <h2 id="introduction">Introduction</h2> <p>Cassandra distributes data around the cluster via the <em>partition key</em>.</p> <div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CREATE TABLE my_key.my_table_by_postal_code (
  postal_code text,
  id uuid,
  balance float,
  PRIMARY KEY(postal_code, id));
</code></pre></div></div> <p>In the above table the <em>partition key</em> is <code class="language-plaintext highlighter-rouge">postal_code</code> and the <em>clustering column</em> is <code class="language-plaintext highlighter-rouge">id</code>. The <em>partition key</em> will locate the data on the cluster for us. The clustering column allows us to have multiple rows per <em>partition key</em>, so that we can filter how much data we read per partition. The ‘optimal’ query is one that retrieves data from only one node, and not so much data that GC pressure or latency issues result. The following query breaks that rule by retrieving 2 partitions at once via the IN parameter:</p> <div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SELECT * FROM my_key.my_table_by_postal_code WHERE postal_code IN ('77002', '77043');
</code></pre></div></div> <p>This <em>can be</em> slower than doing two separate queries asynchronously, especially if those partitions are on two different nodes (imagine if there are 1000+ partitions in the IN statement). In summary, the simple rule to stick to is “1 partition per query”.</p>
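<p>To make that concrete, here is a minimal sketch of the “two separate queries asynchronously” alternative, using the DataStax Python driver (the same driver the load-testing post in this series uses; the contact point is a placeholder):</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># sketch: one single-partition query per postal code, in flight concurrently,
# instead of SELECT ... WHERE postal_code IN ('77002', '77043')
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])  # placeholder contact point
session = cluster.connect()
select = session.prepare(
    "SELECT * FROM my_key.my_table_by_postal_code WHERE postal_code = ?")

# each request touches exactly one partition
futures = [session.execute_async(select, [code]) for code in ('77002', '77043')]
for future in futures:
    for row in future.result():  # block only when we actually need the rows
        print(row)

cluster.shutdown()
</code></pre></div></div>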
<h3 id="partition-sizes">Partition sizes</h3> <p>A common mistake when data modeling is to jam as much data as possible into a single partition.</p> <ul> <li>This doesn’t distribute the data well and therefore misses the point of a distributed database.</li> <li>There are practical limits on the <a href="https://issues.apache.org/jira/browse/CASSANDRA-9754">performance of partition sizes</a>.</li> </ul> <h3 id="table-per-query-pattern">Table per query pattern</h3> <p>A common approach to optimize around partition lookup is to create a table per query, and write to all of them on update. The following example has two related tables, each solving a different query:</p> <div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-- query by postal_code
CREATE TABLE my_key.my_table_by_postal_code (
  postal_code text,
  id uuid,
  balance float,
  PRIMARY KEY(postal_code, id));

SELECT * FROM my_key.my_table_by_postal_code WHERE postal_code = '77002';

-- query by id
CREATE TABLE my_key.my_table (
  id uuid,
  name text,
  address text,
  city text,
  state text,
  postal_code text,
  country text,
  balance float,
  PRIMARY KEY(id));

SELECT * FROM my_key.my_table WHERE id = 7895c6ff-008b-4e4c-b0ff-ba4e4e099326;
</code></pre></div></div> <p>You can update both tables at once with a logged batch:</p> <div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>BEGIN BATCH
  INSERT INTO my_key.my_table (id, name, address, city, state, postal_code, country, balance)
  VALUES (7895c6ff-008b-4e4c-b0ff-ba4e4e099326, 'Jane Doe', '1 rue Sainte-Catherine',
          'Bordeaux', 'Gironde', '33000', 'France', 56.20);
  INSERT INTO my_key.my_table_by_postal_code (postal_code, id, balance)
  VALUES ('33000', 7895c6ff-008b-4e4c-b0ff-ba4e4e099326, 56.20);
APPLY BATCH;
</code></pre></div></div>
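<p>The same dual write from application code, as a small Python-driver sketch (assuming the two tables above and a reachable node; the sample values are placeholders):</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># sketch: write the same logical row to both query tables in one logged batch
import uuid

from cassandra.cluster import Cluster
from cassandra.query import BatchStatement, BatchType

cluster = Cluster(['127.0.0.1'])  # placeholder contact point
session = cluster.connect()

insert_main = session.prepare(
    "INSERT INTO my_key.my_table (id, name, address, city, state, postal_code, country, balance) "
    "VALUES (?, ?, ?, ?, ?, ?, ?, ?)")
insert_by_postal_code = session.prepare(
    "INSERT INTO my_key.my_table_by_postal_code (postal_code, id, balance) VALUES (?, ?, ?)")

row_id = uuid.uuid4()
batch = BatchStatement(batch_type=BatchType.LOGGED)  # LOGGED is the default
batch.add(insert_main, (row_id, 'Jane Doe', '1 rue Sainte-Catherine',
                        'Bordeaux', 'Gironde', '33000', 'France', 56.20))
batch.add(insert_by_postal_code, ('33000', row_id, 56.20))
session.execute(batch)

cluster.shutdown()
</code></pre></div></div>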
<h3 id="source-of-truth">Source of truth</h3> <p>A common design pattern is to have one table act as the authoritative one over the data; if for some reason there is a mismatch or conflict in the other tables, as long as there is one table considered “the source of truth” it is easy to fix any conflicts later. This is typically the table that would match what we see in typical relational databases, and it has all the data needed to generate all related views or indexes for different query methods. Taking the prior example, <code class="language-plaintext highlighter-rouge">my_table</code> is the source of truth:</p> <div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-- source of truth table
CREATE TABLE my_key.my_table (
  id uuid,
  name text,
  address text,
  city text,
  state text,
  postal_code text,
  country text,
  balance float,
  PRIMARY KEY(id));

SELECT * FROM my_key.my_table WHERE id = 7895c6ff-008b-4e4c-b0ff-ba4e4e099326;

-- based on my_key.my_table and so we can query by postal_code
CREATE TABLE my_key.my_table_by_postal_code (
  postal_code text,
  id uuid,
  balance float,
  PRIMARY KEY(postal_code, id));

SELECT * FROM my_key.my_table_by_postal_code WHERE postal_code = '77002';
</code></pre></div></div> <p>Next we discuss strategies for keeping related tables in sync.</p> <h3 id="materialized-views">Materialized views</h3> <p>Materialized views are a feature that ships with Cassandra but is currently considered rather experimental. If you want to use them anyway:</p> <div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CREATE MATERIALIZED VIEW my_key.my_table_by_postal_code_mv AS
  SELECT postal_code, id, balance
  FROM my_key.my_table
  WHERE postal_code IS NOT NULL AND id IS NOT NULL
  PRIMARY KEY(postal_code, id);
</code></pre></div></div> <p>Materialized views at least run faster than the comparable BATCH insert pattern, but they have a number of bugs and known issues that are still pending fixes.</p> <h3 id="secondary-indexes">Secondary indexes</h3> <p>These are the original server-side approach to handling different query patterns, but they have a large number of downsides:</p> <ul> <li>rows are read serially, one node at a time, until the limit is reached.</li> <li>a suboptimal storage layout leads to very large partitions if the data distribution of the secondary index is not ideal.</li> </ul> <p>For just those two reasons I think it’s rare that one can use secondary indexes and expect reasonable performance.</p>
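<p>For reference, creating a native secondary index looks like the following (a sketch via the Python driver; the index name is illustrative). The hand-built alternative is shown next:</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># sketch: the built-in secondary index this section is talking about
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])  # placeholder contact point
session = cluster.connect()
session.execute(
    "CREATE INDEX IF NOT EXISTS my_table_postal_code_idx "
    "ON my_key.my_table (postal_code)")

# this now works without the partition key, but fans out across the cluster
rows = session.execute(
    "SELECT * FROM my_key.my_table WHERE postal_code = '77002'")
for row in rows:
    print(row)

cluster.shutdown()
</code></pre></div></div>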
<p>However, you can make one by hand and just query that data asynchronously to avoid some of the downsides.</p> <div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CREATE TABLE my_key.my_table_by_postal_code_2i (
  postal_code text,
  id uuid,
  PRIMARY KEY(postal_code, id));

SELECT * FROM my_key.my_table_by_postal_code_2i WHERE postal_code = '77002';

-- retrieve all rows then asynchronously query the resulting ids
SELECT * FROM my_key.my_table WHERE id = ad004ff2-e5cb-4245-94b8-d6acbc22920a;
SELECT * FROM my_key.my_table WHERE id = d30e9c65-17a1-44da-bae0-b7bb742eefd6;
SELECT * FROM my_key.my_table WHERE id = e016ae43-3d4e-4093-b745-8583627eb1fe;
</code></pre></div></div> <h2 id="exercises">Exercises</h2> <h3 id="contact-list">Contact List</h3> <p>This is a good basic first use case, as one needs to use multiple tables for the same data, but there should not be too many. A possible starting schema is sketched after the requirements.</p> <h4 id="requirements">requirements</h4> <ul> <li>contacts should have first name, last name, address, state/region, country, postal code</li> <li>lookup by contact id</li> <li>retrieve all contacts by a given last name</li> <li>retrieve counts by zip code</li> </ul>
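<p>If you want a hint, here is one possible starting point, sketched with the Python driver (the table and column names are just one choice among many valid models; a counter table is one way to serve the zip-code counts):</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># sketch: one table per query, plus a counter table for counts
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])  # placeholder contact point
session = cluster.connect()

# source of truth, serves "lookup by contact id"
session.execute(
    "CREATE TABLE IF NOT EXISTS my_key.contacts (id uuid, first_name text, "
    "last_name text, address text, state text, country text, postal_code text, "
    "PRIMARY KEY(id))")

# duplicated data, serves "retrieve all contacts by a given last name"
session.execute(
    "CREATE TABLE IF NOT EXISTS my_key.contacts_by_last_name (last_name text, "
    "id uuid, first_name text, PRIMARY KEY(last_name, id))")

# counter table, serves "retrieve counts by zip code"
session.execute(
    "CREATE TABLE IF NOT EXISTS my_key.contact_counts_by_postal_code "
    "(postal_code text PRIMARY KEY, total counter)")
session.execute(
    "UPDATE my_key.contact_counts_by_postal_code SET total = total + 1 "
    "WHERE postal_code = '77002'")

cluster.shutdown()
</code></pre></div></div>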
<h3 id="music-service">Music Service</h3> <p>Takes the basics from the previous exercise and requires a more involved understanding of the concepts. It will require many tables and some difficult trade-offs on partition sizing. There is no one correct way to do this.</p> <h4 id="requirements-1">requirements</h4> <ul> <li>songs should have album, artist, name, and total likes</li> <li>the contact list exercise can be used as a basis for the “users”; users will have no login because we’re trusting people</li> <li>retrieve all songs by artist</li> <li>retrieve all songs in an album</li> <li>retrieve an individual song and how many times it’s been liked</li> <li>retrieve all liked songs for a given user</li> <li>“like” a song</li> <li>keep a count of how many times a song has been listened to by all users</li> </ul> <h3 id="iot-analytics">IoT Analytics</h3> <p>This will require some extensive time series modeling and takes some of the lessons from the Music Service further. The table(s) used will be informed by the query.</p> <h4 id="requirements-2">requirements</h4> <ul> <li>use the music service data model as a basis; we will be tracking each “registered device” that uses the music service</li> <li>a given user will have 1-5 devices</li> <li>log all songs listened to by a given device</li> <li>retrieve songs listened to for a device by day</li> <li>retrieve songs listened to for a device by month</li> <li>retrieve total listen time for a device by day</li> <li>retrieve total listen time for a device by month</li> <li>retrieve artists listened to for a device by day</li> <li>retrieve artists listened to for a device by month</li> </ul> Getting started with Cassandra: Load testing Cassandra in brief https://lostechies.com/ryansvihla/2020/02/04/getting-started-cassandra-part-2/ Los Techies urn:uuid:c943ad17-c8c9-9027-730e-494f4fdb5d29 Tue, 04 Feb 2020 20:23:00 +0000 An opinionated guide on the “correct” way to load test Cassandra. I’m aiming to keep this short, so I’m going to leave out a lot of the nuance that one would normally get into when talking about load testing Cassandra. <p>An opinionated guide on the “correct” way to load test Cassandra. I’m aiming to keep this short, so I’m going to leave out a <em>lot</em> of the nuance that one would normally get into when talking about load testing Cassandra.</p> <h2 id="if-you-have-no-data-model-in-mind">If you have no data model in mind</h2> <p>Use cassandra-stress, since it’s already around:</p> <ul> <li>first initialize the keyspace with RF 3: <code class="language-plaintext highlighter-rouge">cassandra-stress "write cl=ONE no-warmup -col size=FIXED(15000) -schema replication(strategy=SimpleStrategy,factor=3)"</code></li> <li>second, run stress: <code class="language-plaintext highlighter-rouge">cassandra-stress "mixed n=1000k cl=ONE -col size=FIXED(15000)"</code></li> <li>repeat as often as you’d like, with as many clients as you want.</li> </ul> <h2 id="if-you-have-a-specific-data-model-in-mind">If you have a specific data model in mind</h2> <p>You can use cassandra-stress, but I suspect you’re going to find your data model isn’t supported (collections, for example) or that you don’t have the required PhD to make it work the way you want. There are probably 2 dozen options from here you can use to build your load test; some of the more popular ones are Gatling, JMeter, and tlp-stress. My personal favorite for this, though: write a small, simple Python or Java program that replicates your use case accurately in your own code, using a faker library to generate your data. This takes more time, but you tend to have fewer surprises in production, as it will accurately model your code.</p>
<h3 id="small-python-script-with-python-driver">Small python script with python driver</h3> <ul> <li>use python3 and virtualenv</li> <li><code class="language-plaintext highlighter-rouge">python -m venv venv</code></li> <li><code class="language-plaintext highlighter-rouge">source venv/bin/activate</code></li> <li>read and follow the install <a href="https://docs.datastax.com/en/developer/python-driver/3.21/getting_started/">docs</a></li> <li>if you want to skip the docs you can get away with <code class="language-plaintext highlighter-rouge">pip install cassandra-driver</code></li> <li>install a faker library: <code class="language-plaintext highlighter-rouge">pip install Faker</code></li> </ul> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import argparse
import random
import time
import uuid

from cassandra.cluster import Cluster
from cassandra.query import BatchStatement
from faker import Faker

parser = argparse.ArgumentParser(description='simple load generator for cassandra')
parser.add_argument('--hosts', default='127.0.0.1', type=str,
                    help='comma separated list of hosts to use for contact points')
parser.add_argument('--port', default=9042, type=int, help='port to connect to')
parser.add_argument('--trans', default=1000000, type=int, help='number of transactions')
parser.add_argument('--inflight', default=25, type=int, help='number of operations in flight')
parser.add_argument('--errors', default=-1, type=int,
                    help='number of errors before stopping. default is unlimited')
args = parser.parse_args()

fake = Faker(['en-US'])
hosts = args.hosts.split(",")
cluster = Cluster(hosts, port=args.port)
try:
    session = cluster.connect()
    print("setup schema")
    session.execute("CREATE KEYSPACE IF NOT EXISTS my_key WITH REPLICATION = "
                    "{'class': 'SimpleStrategy', 'replication_factor': 1}")
    session.execute("CREATE TABLE IF NOT EXISTS my_key.my_table "
                    "(id uuid, name text, address text, state text, zip text, "
                    "balance int, PRIMARY KEY(id))")
    session.execute("CREATE TABLE IF NOT EXISTS my_key.my_table_by_zip "
                    "(zip text, id uuid, balance bigint, PRIMARY KEY(zip, id))")
    print("allow schema to replicate throughout the cluster for 30 seconds")
    time.sleep(30)

    print("prepare queries")
    insert = session.prepare("INSERT INTO my_key.my_table "
                             "(id, name, address, state, zip, balance) VALUES (?, ?, ?, ?, ?, ?)")
    insert_rollup = session.prepare("INSERT INTO my_key.my_table_by_zip "
                                    "(zip, id, balance) VALUES (?, ?, ?)")
    row_lookup = session.prepare("SELECT * FROM my_key.my_table WHERE id = ?")
    rollup = session.prepare("SELECT sum(balance) FROM my_key.my_table_by_zip WHERE zip = ?")

    futures = []
    ids = []
    error_counter = 0

    def get_id():
        # pick a previously written id, or a random one if nothing is written yet
        items = len(ids)
        if items == 0:
            return uuid.uuid4()
        if items == 1:
            return ids[0]
        return ids[random.randint(0, items - 1)]

    print("starting transactions")
    for i in range(args.trans):
        # roughly 50% batched writes, 25% row lookups, 25% rollup reads
        chance = random.randint(1, 100)
        if chance <= 50:
            new_id = uuid.uuid4()
            ids.append(new_id)
            state = fake.state_abbr()
            zip_code = fake.zipcode_in_state(state)
            balance = random.randint(1, 50000)
            query = BatchStatement()
            query.add(insert.bind([new_id, fake.name(), fake.address(), state, zip_code, balance]))
            query.add(insert_rollup.bind([zip_code, new_id, balance]))
        elif chance <= 75:
            query = row_lookup.bind([get_id()])
        else:
            query = rollup.bind([fake.zipcode()])
        futures.append(session.execute_async(query))
        if i % args.inflight == 0:
            # drain the in-flight operations before submitting more
            for f in futures:
                try:
                    f.result()  # we don't care about the result so toss it
                except Exception as e:
                    print("unexpected exception %s" % e)
                    if args.errors > 0:
                        error_counter = error_counter + 1
                        if error_counter > args.errors:
                            # stop the whole run, not just this drain loop
                            raise SystemExit("too many errors, stopping. Consider raising the "
                                             "--errors flag if this happens more quickly than you'd like")
            futures = []
            print("submitted %i of %i transactions" % (i, args.trans))
finally:
    cluster.shutdown()
</code></pre></div></div> <h3 id="small-java-program-with-latest-java-driver">Small java program with latest java driver</h3> <ul> <li>download java 8</li> <li>create a command line application in your project technology of choice (I used maven in this example for no particularly good reason)</li> <li>download a faker lib like <a href="https://github.com/DiUS/java-faker">this one</a> and the <a href="https://github.com/datastax/java-driver">Cassandra java driver from DataStax</a>, again using your preferred technology to do so.</li> <li>run the following code sample somewhere (set your RF and your desired queries and data model)</li> <li>use different numbers of clients at your cluster until you get enough “saturation” or the server stops responding.</li> </ul> <p><a href="https://github.com/rssvihla/simple_cassandra_load_test/tree/master/java/simple-cassandra-stress">See complete example</a></p> <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code>package pro.foundev;

import java.lang.RuntimeException;
import java.lang.Thread;
import java.util.Locale;
import java.util.ArrayList;
import java.util.List;
import java.util.function.*;
import java.util.Random;
import java.util.UUID;
import java.util.concurrent.CompletionStage;
import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.CqlSessionBuilder;
import com.datastax.oss.driver.api.core.cql.*;
import com.github.javafaker.Faker;

public class App {
    public static void main(String[] args) {
        // ... the rest of the listing is truncated in the feed;
        // see the complete example linked above
    }
}
</code></pre></div></div> Getting started with Cassandra: Setting up a Multi-DC environment https://lostechies.com/ryansvihla/2020/02/03/getting-started-cassandra-part-1/ Los Techies urn:uuid:f11f8200-727e-e8cb-60b2-06dc45a16751 Mon, 03 Feb 2020 20:23:00 +0000 This is a quick and dirty opinionated guide to setting up a Cassandra cluster with multiple data centers. <p>This is a quick and dirty opinionated guide to setting up a Cassandra cluster with multiple data centers.</p> <h2 id="a-new-cluster">A new cluster</h2> <ul> <li>In cassandra.yaml set <code class="language-plaintext highlighter-rouge">endpoint_snitch: GossipingPropertyFileSnitch</code>; some prefer PropertyFileSnitch for the ease of pushing out one file, but GossipingPropertyFileSnitch is harder to get wrong in my experience.</li> <li>set dc in cassandra-rackdc.properties. Set it to whatever dc you want that node to be in. Ignore rack until you really need it; 8/10 people that use racks do it wrong the first time, and it’s slightly painful to unwind.</li> <li>finish adding all of your nodes.</li> <li>if using authentication, set the <code class="language-plaintext highlighter-rouge">system_auth</code> keyspace to use NetworkTopologyStrategy in cqlsh with RF 3 (or == the number of replicas if less than 3 per dc) for each datacenter you’ve created: <code class="language-plaintext highlighter-rouge">ALTER KEYSPACE system_auth WITH REPLICATION= {'class' : 'NetworkTopologyStrategy', 'data_center_name' : 3, 'data_center_name' : 3};</code>, and run repair after changing RF</li> <li><code class="language-plaintext highlighter-rouge">nodetool repair -pr system_auth</code> on each node in the cluster for the new keyspace.</li> <li>create your new keyspaces for your app with RF 3 in each dc (much like you did for the <code class="language-plaintext highlighter-rouge">system_auth</code> step above).</li> <li><code class="language-plaintext highlighter-rouge">nodetool repair -pr whatever_new_keyspace</code> on each node in the cluster for the new keyspace.</li> </ul> <h2 id="an-existing-cluster">An existing cluster</h2> <p>This is harder and involves more work and more options, but I’m going to discuss the way that gets you into the least amount of trouble operationally.</p> <ul> <li>make sure <em>none</em> of the drivers you use to connect to Cassandra are using DowngradingConsistencyRetryPolicy, or using the maligned withUsedHostsPerRemoteDc, especially allowRemoteDCsForLocalConsistencyLevel, as these may cause your driver to send requests to the remote data center before it’s populated with data.</li> <li>switch <code class="language-plaintext highlighter-rouge">endpoint_snitch</code> on each node to GossipingPropertyFileSnitch</li> <li>set dc in cassandra-rackdc.properties. Set it to whatever dc you want that node to be in.
Ignore rack until you really need it; 8/10 people that use racks do it wrong the first time, and it’s slightly painful to unwind.</li> <li>bootstrap each node in the new data center.</li> <li>if using authentication, set the <code class="language-plaintext highlighter-rouge">system_auth</code> keyspace to use NetworkTopologyStrategy in cqlsh with RF 3 (or == the number of replicas if less than 3 per dc) for each datacenter you’ve created: <code class="language-plaintext highlighter-rouge">ALTER KEYSPACE system_auth WITH REPLICATION= {'class' : 'NetworkTopologyStrategy', 'data_center_name' : 3, 'data_center_name' : 3};</code>, and run repair after changing RF</li> <li><code class="language-plaintext highlighter-rouge">nodetool repair -pr system_auth</code> on each node in the cluster for the new keyspace.</li> <li>alter your app keyspaces with RF 3 in each dc (much like you did for the <code class="language-plaintext highlighter-rouge">system_auth</code> step above),</li> <li><code class="language-plaintext highlighter-rouge">nodetool repair -pr whatever_keyspace</code> on each node in the cluster for the new keyspace.</li> </ul> <p>Enjoy your new data center!</p> <h3 id="how-to-get-data-to-new-dc">how to get data to new dc</h3> <h4 id="repair-approach">Repair approach</h4> <p>Best done if your repair jobs can’t be missed or stopped, e.g. because you have a process like OpsCenter or Reaper running repairs. It also has the advantage of being very easy, and if you’ve already automated repair you’re basically done.</p> <ul> <li>let repair jobs continue… that’s it!</li> </ul> <h4 id="rebuild-approach">Rebuild approach</h4> <p>Faster and less resource intensive, if you have enough time to complete it while repair is stopped. Rebuild is easier to ‘resume’ than repair in many ways, so this has a number of advantages.</p> <ul> <li>run <code class="language-plaintext highlighter-rouge">nodetool rebuild</code> on each node in the new dc only; if it dies for some reason, rerunning the command will resume the process.</li> <li>run <code class="language-plaintext highlighter-rouge">nodetool cleanup</code></li> </ul> <h4 id="yolo-rebuild-with-repair">YOLO rebuild with repair</h4> <p>This will probably overstream its share of data, and honestly a lot of folks do this for some reason in practice:</p> <ul> <li>leave repair jobs running</li> <li>run <code class="language-plaintext highlighter-rouge">nodetool rebuild</code> on each node in the new dc only; if it dies for some reason, rerunning the command will resume the process.</li> <li>run <code class="language-plaintext highlighter-rouge">nodetool cleanup</code> on each node</li> </ul> <h2 id="cloud-strategies">Cloud strategies</h2> <p>There are a few valid approaches to this and none of them are wrong, IMO.</p> <h3 id="region--dc-rack--az">region == DC, rack == AZ</h3> <p>You will need to get into racks, and a lot of people get this wrong and imbalance the racks, but you get the advantage of more intelligent failure modes, with racks mapping to AZs.</p> <h3 id="azregardless-of-region--dc">AZ..regardless of region == DC</h3> <p>This allows things to be balanced easily, but you have no good option for racks then. However, some people think racks are overrated, and I’d say a majority of clusters run with one rack.</p>
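<p>One closing client-side note, tied to the driver warning in the “existing cluster” steps above: whichever driver you use, pin application clients to their local data center so requests don’t get routed to the new, still-empty DC. A sketch with the Python driver (an assumption on my part, since the policy names above are from the Java driver; the contact point and <code class="language-plaintext highlighter-rouge">local_dc</code> value are placeholders):</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># sketch: DC-aware load balancing so this client only talks to its local DC
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy

profile = ExecutionProfile(
    load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='existing_dc'))
cluster = Cluster(['10.0.0.1'],  # placeholder contact point in the existing DC
                  execution_profiles={EXEC_PROFILE_DEFAULT: profile})
session = cluster.connect()
</code></pre></div></div>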
MVP how minimal https://lostechies.com/ryansvihla/2018/12/20/mvp-how-minimal/ Los Techies urn:uuid:3afadd9e-98a7-8d37-b797-5403312a2999 Thu, 20 Dec 2018 20:00:00 +0000 MVPs or Minimum Viable Products are pretty contentious ideas for something seemingly simple. Depending on background and where people are coming from experience-wise, those terms carry radically different ideas. In recent history I’ve seen up close two extreme contrasting examples of MVP: <p>MVPs or Minimum Viable Products are pretty contentious ideas for something seemingly simple. Depending on background and where people are coming from experience-wise, those terms carry radically different ideas. In recent history I’ve seen up close two extreme contrasting examples of MVP:</p> <ul> <li>Mega Minimal: website and db, mostly manual on the backend</li> <li>Mega Mega: provisioning system, dynamic tuning of systems via ML, automated operations, monitoring, and a few others I’m leaving out.</li> </ul> <h2 id="feedback">Feedback</h2> <p>If we’re evaluating which approach gives us more feedback, the Mega Minimal MVP is gonna win hands down here. Some will counter they don’t want to give people a bad impression with a limited product, and that’s fair, but it’s better than no impression (the dreaded never-shipped MVP). The Mega Mega MVP I referenced took months to demo, only had one of those checkboxes set up, and wasn’t ever demoed again. So we can categorically say that failed at getting any feedback.</p> <p>Whereas the Mega Minimal MVP got enough feedback and users for the founders to realize that wasn’t a business for them. Better than after hiring a huge team and sinking a million plus into dev efforts, for sure. Not the happy ending I’m sure you all were expecting, but I view that as mission accomplished.</p> <h2 id="core-value">Core Value</h2> <ul> <li>Mega Minimal: they only focused on a single feature, executed well enough that people gave them some positive feedback, but not enough to justify automating everything.</li> <li>Mega Mega: I’m not sure anyone who talked about the product saw the same core value, and there were several rewrites and shifts along the way.</li> </ul> <p>Advantage Mega Minimal again.</p> <h2 id="what-about-entrants-into-a-crowded-field">What about entrants into a crowded field</h2> <p>Well, that is harder, and the MVP tends to be less minimal, because the baseline expectations are just much higher. I still lean towards Mega Minimal having a better chance at getting users, since there is a non-zero chance the Mega Mega MVP will never get finished. I still think it’s worth the exercise of focusing on the core value that makes your product <em>not</em> a me-too, and even considering how you can find a niche in a crowded field instead of just being “better”; your MVP can be that niche differentiator.</p> <h2 id="internal-users">Internal users</h2> <p>Sometimes a good middle ground is considering getting lots of internal users if you’re really worried about bad experiences. This has its definite downsides, however, and you may not get diverse enough opinions. But it does give you some feedback while saving some face and avoiding bad experiences. I often think of the example of EC2 that was heavily used by Amazon, before being released to the world.
That was a luxury Amazon had, where their customer base and their user base happened to be very similar, and they had bigger scale needs than any of their early customers, so the early internal feedback loop was a very strong signal.</p> <h2 id="summary">Summary</h2> <p>In the end, however you want to approach MVPs is up to you, and if you find success with a meatier MVP than I have, please don’t let me push you away from what works. But if you are having trouble shipping and are getting pushed all the time to add one more feature to that MVP before releasing it, consider stepping back and asking: is this really core value for the product? Do you already have your core value? If so, consider just releasing it.</p> Collaboration vs. Critique http://aspiringcraftsman.com/2018/05/18/collaboration-vs-critique.html Aspiring Craftsman urn:uuid:d7ab1cc0-12cd-bac9-a131-795e6dd47f3d Fri, 18 May 2018 17:00:00 +0000 While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development occurs by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately, the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day, and most of the problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique. <p>While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development occurs by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately, the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day, and most of the problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation.
One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique.</p> <p>To help illustrate what I’ve seen happen countless times both in catch-up design sessions and code reviews, consider the following two scenarios:</p> <h3 id="scenario-1">Scenario 1</h3> <p>Tom and Sally are both developers on a team maintaining a large-scale application. Tom takes the next task in the development queue which happens to have some complex processes that will need to be addressed. Being the good development team that they are, both Tom and Sally are aware of the requirements of the application (i.e. how the app needs to work from the user’s perspective), but they have deferred design-level discussions until the time of implementation. After Tom gets into the process a little, seeing that the problem is non-trivial, he pings Sally to help him brainstorm different approaches to solving the problem. Tom and Sally have been working together for over a year and have become accustomed to these sort of ad-hoc design sessions. As they begin discussing the problem, they each start tossing ideas out on the proverbial table resulting in multiple approaches to compare and contrast. The nature of the discussion is such that neither Tom nor Sally are embarrassed or offended when the other points out flaws in a given design idea because there’s a sense of safety in their mutual understanding that this is a brainstorming session and that neither have thought in depth about the solutions being set forth yet. Tom throws out a couple of ideas, but ends up shooting them down himself as he uses Sally as a sounding board for the ideas. Sally does the same, but toward the end of the conversation suggests a slight alteration to one of Tom’s initial suggestions that they think may make it work after all. They end the session with a sense that they’ve worked together to arrive at the best solution.</p> <h3 id="scenario-2">Scenario 2</h3> <p>Bill and Jake are developers on another team. They tend to work in a more siloed fashion, but they do rely upon one another for help from time to time and they are required to do code reviews prior to their code being merged into the main branch of development. Bill takes the next task in the development queue and spends the better part of an afternoon working out a solution with a basic working skeleton of the direction he’s going. The next day he decides that it might be good to have Jake take a look at the design to make him aware of the direction. Seeing where Bill’s design misses a few opportunities to make the implementation more adaptable to changes in the future, Jake points out where he would have done things differently. Bill acknowledges that Jake’s suggestions would be better and would have probably been just as easy to implement from the beginning, but inwardly he’s a bit disappointed that Jake didn’t like his design as-is and that he has to do some rework. In the end, Bill is left with a feeling of critique rather than collaboration.</p> <p>Whether it’s a high-level UML diagram or working code, how one person tends to perceive feedback on the ideas comprising a potential solution has everything to do with timing. It can be the exact same feedback they would have received either way, but when the feedback occurs often makes a difference between whether it’s perceived as collaboration or critique. 
It’s all about when the conversation happens.</p>