Los Techies

Domain-Driven Refactoring: Extracting Domain Services https://jimmybogard.com/domain-driven-refactoring-extracting-domain-services/ Jimmy Bogard Thu, 29 Jul 2021 14:12:19 +0000 <p>Posts in this series:</p><ul><li><a href="https://jimmybogard.com/domain-driven-refactoring-intro/">Intro</a></li><li><a href="https://jimmybogard.com/domain-driven-refactoring-procedural-beginnings/">Procedural Beginnings</a></li><li><a href="https://jimmybogard.com/domain-driven-refactoring-long-methods/">Long Methods</a></li><li><a href="https://jimmybogard.com/domain-driven-refactoring-extracting-domain-services/">Extracting Domain Services</a></li></ul><p>In my last post, we looked at the <a href="https://industriallogic.com/xp/refactoring/composeMethod.html">Compose Method</a> refactoring as a means of breaking up long methods into smaller ones, each with an equivalent level of granularity. This is the refactoring I tend to use the most in my applications, mainly because it's the simplest way of breaking up a hard-to-understand method.</p><p>However, this series is about Domain-Driven Design, not just plain refactorings, so what's the difference here? With domain-driven refactoring, we're trying to refactor <em>towards</em> a domain-driven design, which defines model building blocks of:</p><ul><li>Entities</li><li>Aggregates</li><li>Services</li><li>Factories</li><li>and more</li></ul><p>These patterns aren't new, nor is refactoring to these models. In fact, it's explicitly called out in the book in the "Refactoring towards deeper insight" section, with the idea that we don't start with a domain model, but rather arrive at it through refactoring.</p><p>We last left off merely extracting methods, which is fine for procedural code but still left a bit to be desired, especially around testability.
This method in particular, which uses an external web API to calculate the value of an offer, is problematic:</p><pre><code class="language-csharp">private async Task&lt;int&gt; CalculateOfferValue(Member member, OfferType offerType,
    CancellationToken cancellationToken)
{
    var response = await _httpClient.GetAsync(
        $"/calculate-offer-value?email={member.Email}&amp;offerType={offerType.Name}",
        cancellationToken);

    response.EnsureSuccessStatusCode();

    await using var responseStream = await response.Content.ReadAsStreamAsync(cancellationToken);
    var value = await JsonSerializer.DeserializeAsync&lt;int&gt;(
        responseStream, cancellationToken: cancellationToken);

    return value;
}
</code></pre><p>Now a lot of folks I know would never have allowed this code to exist as-is in the first place, immediately encapsulating this web API in some kind of service from the outset. But that's no fun! Let's instead use refactoring techniques (assisted by ReSharper) to:</p><ul><li><a href="https://refactoring.com/catalog/extractClass.html">Extract Class</a></li><li><a href="https://www.refactoring.com/catalog/extractInterface.html">Extract Interface</a></li></ul><p>And a couple of others to pull that method out into something else.</p><h3 id="extracting-the-class">Extracting the Class</h3><p>One of the immediate challenges we'll have moving this method is that it contains a reference to a private field. If we extract the class as-is, we'll need to make sure any private fields are also moved. Some tooling can't do all this work for us, but luckily, we're using <a href="https://www.jetbrains.com/resharper/">ReSharper</a>! I put my caret on the method I want to extract and select the "Extract Class" refactoring to have this dialog pop up:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/7/291338_image.png" class="kg-image"><figcaption>Extract class dialog before options chosen</figcaption></figure><p>There are a few errors I have to resolve: the class name, the private field, and the visibility of the existing method (which is private). For a name, I tend to use the name of the method as a guide. If the name of the method is <code>CalculateOfferValue</code>, then the name of the class would represent this responsibility: <code>OfferValueCalculator</code>. Next, I need to fix that private field. The simple fix is to click the "Extract" link, which will extract the field into a private field in the target class. Finally, I can fix the visibility of the target class member by making it public.</p><p>Here's the dialog result after making those choices:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/7/291342_image.png" class="kg-image"><figcaption>Extract class dialog after filling in choices</figcaption></figure><p>And we also see that it's filled in the "Reference to extracted" field above.
Finally, when we perform the refactoring, our class is extracted:</p><pre><code class="language-csharp">public class OfferValueCalculator
{
    private readonly HttpClient _httpClient;

    public OfferValueCalculator(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task&lt;int&gt; CalculateOfferValue(Member member, OfferType offerType,
        CancellationToken cancellationToken)
    {
        var response = await _httpClient.GetAsync(
            $"/calculate-offer-value?email={member.Email}&amp;offerType={offerType.Name}",
            cancellationToken);

        response.EnsureSuccessStatusCode();

        await using var responseStream = await response.Content
            .ReadAsStreamAsync(cancellationToken);
        var value = await JsonSerializer.DeserializeAsync&lt;int&gt;(
            responseStream, cancellationToken: cancellationToken);

        return value;
    }
}
</code></pre><p>And our constructor in the handler now uses this new class:</p><pre><code class="language-csharp">public class AssignOfferHandler : IRequestHandler&lt;AssignOfferRequest&gt;
{
    private readonly AppDbContext _appDbContext;
    private readonly OfferValueCalculator _offerValueCalculator;

    public AssignOfferHandler(
        AppDbContext appDbContext,
        HttpClient httpClient)
    {
        _appDbContext = appDbContext;
        _offerValueCalculator = new OfferValueCalculator(httpClient);
    }
</code></pre><p>And finally our usage switches over to this new extracted class:</p><pre><code class="language-csharp">public async Task&lt;Unit&gt; Handle(AssignOfferRequest request, CancellationToken cancellationToken)
{
    var member = await _appDbContext.Members.FindAsync(request.MemberId, cancellationToken);
    var offerType = await _appDbContext.OfferTypes.FindAsync(request.OfferTypeId, cancellationToken);

    // Calculate offer value
    var value = await _offerValueCalculator.CalculateOfferValue(member, offerType, cancellationToken);
</code></pre><p>So far so good! But we're not quite done - our handler class still directly instantiates and uses this concrete class, making unit testing difficult.</p><h3 id="extracting-the-interface">Extracting the interface</h3><p>To get to a testable point, we need to first extract an interface for that <code>OfferValueCalculator</code>. First, we'll put our caret in that class and select the Extract Interface refactoring:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/7/291346_image.png" class="kg-image"><figcaption>Extract interface dialog</figcaption></figure><p>We can leave these defaults alone, and we only want that lone method in our new interface. This refactoring creates the new interface and makes our class implement it:</p><pre><code class="language-csharp">public interface IOfferValueCalculator
{
    Task&lt;int&gt; CalculateOfferValue(Member member, OfferType offerType,
        CancellationToken cancellationToken);
}

public class OfferValueCalculator : IOfferValueCalculator
</code></pre><p>Finally, I don't like class names with the same name as interfaces, and instead want to name the implementation based on what makes it different/special. This implementation uses an external web API, so maybe use that? I rename the class to <code>ExternalApiOfferValueCalculator</code>.</p>
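<p>After the rename, the declaration reads like this (a trivial sketch of the result; the body is unchanged from above):</p><pre><code class="language-csharp">// Named for what makes this implementation special: it calls the external web API.
public class ExternalApiOfferValueCalculator : IOfferValueCalculator
{
    // ...same HttpClient-based implementation as before...
}
</code></pre>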
<h3 id="refactoring-to-use-this-new-interface">Refactoring to use this new interface</h3><p>We're not quite done, because our handler class does not use this interface. First things first, on the private field's type declaration, I can bring up the ReSharper refactor dialog and select the "Use Base Type Where Possible" option:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/7/291353_image.png" class="kg-image"><figcaption>Refactoring dialog with Use Base Type Where Possible selected</figcaption></figure><p>ReSharper offers this refactoring when you're refactoring a type's references. With this, ReSharper then asks what base type I want to use:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/7/291354_image.png" class="kg-image"><figcaption>Choosing the base type to refer to instead</figcaption></figure><p>I pick the interface and now my field is converted to the interface:</p><pre><code class="language-csharp">public class AssignOfferHandler : IRequestHandler&lt;AssignOfferRequest&gt;
{
    private readonly AppDbContext _appDbContext;
    private readonly IOfferValueCalculator _offerValueCalculator;

    public AssignOfferHandler(
        AppDbContext appDbContext,
        HttpClient httpClient)
    {
        _appDbContext = appDbContext;
        _offerValueCalculator = new ExternalApiOfferValueCalculator(httpClient);
    }
</code></pre><p>You can do this refactoring on any member's type declaration - fields, properties, method parameters, etc. Finally, I need to convert that incoming constructor parameter away from <code>HttpClient</code> to <code>IOfferValueCalculator</code>. Because this involves dependency injection configuration, it's not something a refactoring tool can fully complete itself. But first, I'll just get the class's signature correct. I highlight the code instantiating the value calculator and select the "<a href="https://refactoring.com/catalog/changeFunctionDeclaration.html">Introduce Parameter</a>" refactoring:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/7/291359_image.png" class="kg-image"><figcaption>Introduce Parameter dialog with parameters selected to be removed</figcaption></figure><p>Note that ReSharper notices there are unused parameters that can be safely removed, so I'll select that incoming <code>HttpClient</code> parameter as well. I do get a warning that ReSharper can't find usages of that constructor, so things might get broken - but that's OK, we'll take care of it later, so let's proceed:</p><pre><code class="language-csharp">public class AssignOfferHandler : IRequestHandler&lt;AssignOfferRequest&gt;
{
    private readonly AppDbContext _appDbContext;
    private readonly IOfferValueCalculator _offerValueCalculator;

    public AssignOfferHandler(
        AppDbContext appDbContext,
        IOfferValueCalculator offerValueCalculator)
    {
        _appDbContext = appDbContext;
        _offerValueCalculator = offerValueCalculator;
    }
</code></pre><p>Hooray! Now I have a domain service, <code>IOfferValueCalculator</code>, and a concrete implementation.</p>
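<p>This is where the testability payoff shows up. As a minimal sketch of my own (not from the original post), the handler can now be exercised in a unit test with a hand-rolled stub instead of a live HTTP call:</p><pre><code class="language-csharp">// Hypothetical test double - any fixed value will do.
public class StubOfferValueCalculator : IOfferValueCalculator
{
    private readonly int _value;

    public StubOfferValueCalculator(int value) =&gt; _value = value;

    // Returns a canned value instead of calling the external web API.
    public Task&lt;int&gt; CalculateOfferValue(Member member, OfferType offerType,
        CancellationToken cancellationToken)
        =&gt; Task.FromResult(_value);
}

// In a test (dbContext here could be an EF Core in-memory context):
// var handler = new AssignOfferHandler(dbContext, new StubOfferValueCalculator(value: 100));
</code></pre>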
<p>The DI configuration is straightforward: I can use the <a href="https://docs.microsoft.com/en-us/dotnet/architecture/microservices/implement-resilient-applications/use-httpclientfactory-to-implement-resilient-http-requests#how-to-use-typed-clients-with-ihttpclientfactory">Typed Client pattern</a> to add this interface/implementation:</p><pre><code class="language-csharp">public void ConfigureServices(IServiceCollection services)
{
    services.AddMediatR(typeof(Startup));
    services.AddHttpClient&lt;IOfferValueCalculator, ExternalApiOfferValueCalculator&gt;();
}
</code></pre>
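<p>One practical note: <code>ExternalApiOfferValueCalculator</code> requests a relative path, so the typed client needs a base address configured somewhere. A minimal sketch, assuming the address lives in configuration (the <code>OfferApi:BaseAddress</code> key is my invention, not from the post):</p><pre><code class="language-csharp">services.AddHttpClient&lt;IOfferValueCalculator, ExternalApiOfferValueCalculator&gt;(client =&gt;
{
    // Hypothetical configuration key; Configuration is the IConfiguration
    // instance normally available in Startup.
    client.BaseAddress = new Uri(Configuration["OfferApi:BaseAddress"]);
});
</code></pre>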
<p>And with that, we've successfully introduced a domain service through refactoring.</p><p>In the next post, I'll take a look at the other methods and see where that logic could/should belong.</p>

Domain-Driven Refactoring: Long Methods https://jimmybogard.com/domain-driven-refactoring-long-methods/ Jimmy Bogard Thu, 22 Jul 2021 14:09:07 +0000 <p>Posts in this series:</p><ul><li><a href="https://jimmybogard.com/domain-driven-refactoring-intro/">Intro</a></li><li><a href="https://jimmybogard.com/domain-driven-refactoring-procedural-beginnings/">Procedural Beginnings</a></li><li><a href="https://jimmybogard.com/domain-driven-refactoring-long-methods/">Long Methods</a></li></ul><p>In the last post, I walked through the main, immediate code smell we saw: a <a href="https://refactoring.guru/smells/long-method">long method</a> - and I would classify this method as long:</p><pre><code class="language-csharp">public class AssignOfferHandler : IRequestHandler&lt;AssignOfferRequest&gt;
{
    private readonly AppDbContext _appDbContext;
    private readonly HttpClient _httpClient;

    public AssignOfferHandler(
        AppDbContext appDbContext,
        HttpClient httpClient)
    {
        _appDbContext = appDbContext;
        _httpClient = httpClient;
    }

    public async Task&lt;Unit&gt; Handle(AssignOfferRequest request, CancellationToken cancellationToken)
    {
        var member = await _appDbContext.Members.FindAsync(request.MemberId, cancellationToken);
        var offerType = await _appDbContext.OfferTypes.FindAsync(request.OfferTypeId, cancellationToken);

        // Calculate offer value
        var response = await _httpClient.GetAsync(
            $"/calculate-offer-value?email={member.Email}&amp;offerType={offerType.Name}",
            cancellationToken);

        response.EnsureSuccessStatusCode();

        await using var responseStream = await response.Content.ReadAsStreamAsync(cancellationToken);
        var value = await JsonSerializer.DeserializeAsync&lt;int&gt;(responseStream,
            cancellationToken: cancellationToken);

        // Calculate expiration date
        DateTime dateExpiring;
        switch (offerType.ExpirationType)
        {
            case ExpirationType.Assignment:
                dateExpiring = DateTime.Today.AddDays(offerType.DaysValid);
                break;
            case ExpirationType.Fixed:
                dateExpiring = offerType.BeginDate?.AddDays(offerType.DaysValid)
                               ?? throw new InvalidOperationException();
                break;
            default:
                throw new ArgumentOutOfRangeException();
        }

        // Assign offer
        var offer = new Offer
        {
            MemberAssigned = member,
            Type = offerType,
            Value = value,
            DateExpiring = dateExpiring
        };
        member.AssignedOffers.Add(offer);
        member.NumberOfActiveOffers++;

        await _appDbContext.Offers.AddAsync(offer, cancellationToken);
        await _appDbContext.SaveChangesAsync(cancellationToken);

        return Unit.Value;
    }
}
</code></pre><p>The telltale signs are the code comments denoting notable sections of our method. You might see comments, or worse, regions, used to try to break up a long method. However, we want self-documenting code instead of code comments (which lie), so the answer is to extract methods for these more complex sections.</p><p>So what should we extract into methods? When examining our code, we broke the overall set of operations down into 3 overall parts:</p><ol><li>Load data from database into objects</li><li>Mutate objects</li><li>Save objects' data into database</li></ol><p>We could extract each of these parts into a method, but examining the original code, we saw that our code comments weren't written as these three parts. It was broken down further:</p><ol><li>Data access to load data models we care about</li><li>Calculate offer value using external web API</li><li>Calculate expiration date based on the OfferType</li><li>Mutate our data models to assign an offer to our member</li><li>Save our data to the database</li></ol><p>And only 2-4 had comments. Initially, we can look at these and pick out the most complex part to extract a method for, which I think is #4. Let's extract a method for that part, using the text in the comment as a guide:</p><pre><code class="language-csharp">public async Task&lt;Unit&gt; Handle(AssignOfferRequest request, CancellationToken cancellationToken)
{
    var member = await _appDbContext.Members.FindAsync(request.MemberId, cancellationToken);
    var offerType = await _appDbContext.OfferTypes.FindAsync(request.OfferTypeId, cancellationToken);

    // Calculate offer value
    var response = await _httpClient.GetAsync(
        $"/calculate-offer-value?email={member.Email}&amp;offerType={offerType.Name}",
        cancellationToken);

    response.EnsureSuccessStatusCode();

    await using var responseStream = await response.Content.ReadAsStreamAsync(cancellationToken);
    var value = await JsonSerializer.DeserializeAsync&lt;int&gt;(responseStream,
        cancellationToken: cancellationToken);

    // Calculate expiration date
    DateTime dateExpiring;
    switch (offerType.ExpirationType)
    {
        case ExpirationType.Assignment:
            dateExpiring = DateTime.Today.AddDays(offerType.DaysValid);
            break;
        case ExpirationType.Fixed:
            dateExpiring = offerType.BeginDate?.AddDays(offerType.DaysValid)
                           ?? throw new InvalidOperationException();
            break;
        default:
            throw new ArgumentOutOfRangeException();
    }

    // Assign offer
    var offer = AssignOffer(member, offerType, value, dateExpiring);

    await _appDbContext.Offers.AddAsync(offer, cancellationToken);
    await _appDbContext.SaveChangesAsync(cancellationToken);

    return Unit.Value;
}
</code></pre>
<p>The extracted method becomes:</p><pre><code class="language-csharp">private static Offer AssignOffer(
    Member member,
    OfferType offerType,
    int value,
    DateTime dateExpiring)
{
    var offer = new Offer
    {
        MemberAssigned = member,
        Type = offerType,
        Value = value,
        DateExpiring = dateExpiring
    };
    member.AssignedOffers.Add(offer);
    member.NumberOfActiveOffers++;
    return offer;
}
</code></pre><p>The original method I think is <em>cleaner</em>, but we're missing something here.</p><h3 id="extracting-and-composing-methods">Extracting and composing methods</h3><p>One common mistake I see is to only extract methods for individual complex sections, resulting in a method with different levels of abstraction. If we're trying to make our code more intention-revealing, it's much more difficult if the reader has a single method that zooms in and out at different conceptual levels.</p><p>That's where I like to use the <a href="https://industriallogic.com/xp/refactoring/composeMethod.html">Compose Method</a> refactoring, where I break a long method down into multiple methods, each as a single intention-revealing step:</p><pre><code class="language-csharp">public async Task&lt;Unit&gt; Handle(AssignOfferRequest request, CancellationToken cancellationToken)
{
    var member = await _appDbContext.Members.FindAsync(request.MemberId, cancellationToken);
    var offerType = await _appDbContext.OfferTypes.FindAsync(request.OfferTypeId, cancellationToken);

    // Calculate offer value
    var value = await CalculateOfferValue(member, offerType, cancellationToken);

    // Calculate expiration date
    var dateExpiring = CalculateExpirationDate(offerType);

    // Assign offer
    var offer = AssignOffer(member, offerType, value, dateExpiring);

    await _appDbContext.Offers.AddAsync(offer, cancellationToken);
    await _appDbContext.SaveChangesAsync(cancellationToken);

    return Unit.Value;
}
</code></pre><p>I usually leave the data access ones alone because they're already (mostly) intention-revealing, but that's just a preference. I can remove the code comments because they're not adding anything at this point. Each method now does exactly what the code comment was describing:</p><pre><code class="language-csharp">private async Task&lt;int&gt; CalculateOfferValue(Member member, OfferType offerType,
    CancellationToken cancellationToken)
{
    var response = await _httpClient.GetAsync(
        $"/calculate-offer-value?email={member.Email}&amp;offerType={offerType.Name}",
        cancellationToken);

    response.EnsureSuccessStatusCode();

    await using var responseStream = await response.Content.ReadAsStreamAsync(cancellationToken);
    var value = await JsonSerializer.DeserializeAsync&lt;int&gt;(responseStream,
        cancellationToken: cancellationToken);

    return value;
}
</code></pre><p>And ReSharper tells me I can make that switch statement into a switch expression, so let's do that:</p><pre><code class="language-csharp">private static DateTime CalculateExpirationDate(OfferType offerType)
{
    DateTime dateExpiring = offerType.ExpirationType switch
    {
        ExpirationType.Assignment =&gt; DateTime.Today.AddDays(offerType.DaysValid),
        ExpirationType.Fixed =&gt; offerType.BeginDate?.AddDays(offerType.DaysValid)
                                ?? throw new InvalidOperationException(),
        _ =&gt; throw new ArgumentOutOfRangeException(nameof(offerType))
    };
    return dateExpiring;
}
</code></pre>
<p>That extra variable doesn't seem to be adding much, so let's apply <a href="https://refactoring.com/catalog/inlineVariable.html">Inline Variable</a> to <code>dateExpiring</code>:</p><pre><code class="language-csharp">private static DateTime CalculateExpirationDate(OfferType offerType) =&gt;
    offerType.ExpirationType switch
    {
        ExpirationType.Assignment =&gt; DateTime.Today.AddDays(offerType.DaysValid),
        ExpirationType.Fixed =&gt; offerType.BeginDate?.AddDays(offerType.DaysValid)
                                ?? throw new InvalidOperationException(),
        _ =&gt; throw new ArgumentOutOfRangeException(nameof(offerType))
    };
</code></pre><p>Inline Variable, the inverse of Extract Variable, is what I like to call a "defactoring", where we're applying the reversal of a refactoring. I do this constantly in my refactoring: trying one direction, seeing if it looks OK, and if not, applying the inverse refactoring.</p>
Razor Pages? Your "OnGet/Post" methods. Even when using <a href="https://github.com/jbogard/MediatR">MediatR</a>, I just dump everything in the handler.</p><p>I do this because I don't want to assume the complexity of my code before it is written. I want to let the code smells guide my way into where the code should belong. If I try to "prefactor" the code, I'm probably wrong (I'm usually wrong, anyway).</p><p>In this sample application, I've got a persistence/data model representing a loyalty rewards system with members and offers:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/7/151412_image.png" class="kg-image"><figcaption>Data model of Member, Offer, and OfferType classes</figcaption></figure><p>Our data model has a Member, an Offer, and an OfferType. We have associations between our classes, and even the names of those associations represent the nature of the relationships, tracking to the terms the business uses.</p><p>When we assign an offer to a Member, we use the OfferType to calculate the expiration date, and the Value comes from an external API. The MediatR handler to do this looks like:</p><pre><code class="language-csharp">public class AssignOfferHandler : IRequestHandler&lt;AssignOfferRequest&gt; { private readonly AppDbContext _appDbContext; private readonly HttpClient _httpClient; public AssignOfferHandler( AppDbContext appDbContext, HttpClient httpClient) { _appDbContext = appDbContext; _httpClient = httpClient; } public async Task&lt;Unit&gt; Handle(AssignOfferRequest request, CancellationToken cancellationToken) { var member = await _appDbContext.Members.FindAsync(request.MemberId, cancellationToken); var offerType = await _appDbContext.OfferTypes.FindAsync(request.OfferTypeId, cancellationToken); // Calculate offer value var response = await _httpClient.GetAsync( $"/calculate-offer-value?email={member.Email}&amp;offerType={offerType.Name}", cancellationToken); response.EnsureSuccessStatusCode(); await using var responseStream = await response.Content.ReadAsStreamAsync(cancellationToken); var value = await JsonSerializer.DeserializeAsync&lt;int&gt;(responseStream, cancellationToken: cancellationToken); // Calculate expiration date DateTime dateExpiring; switch (offerType.ExpirationType) { case ExpirationType.Assignment: dateExpiring = DateTime.Today.AddDays(offerType.DaysValid); break; case ExpirationType.Fixed: dateExpiring = offerType.BeginDate?.AddDays(offerType.DaysValid) ?? throw new InvalidOperationException(); break; default: throw new ArgumentOutOfRangeException(); } // Assign offer var offer = new Offer { MemberAssigned = member, Type = offerType, Value = value, DateExpiring = dateExpiring }; member.AssignedOffers.Add(offer); member.NumberOfActiveOffers++; await _appDbContext.Offers.AddAsync(offer, cancellationToken); await _appDbContext.SaveChangesAsync(cancellationToken); return Unit.Value; } } </code></pre><p>It's quite a lot of code but it's not <em>huge</em>. We do see that there are some code comments to split out some of the major sections. Roughly, the method breaks down to:</p><ol><li>Data access to load data models we care about</li><li>Calculate offer value using external web API</li><li>Calculate expiration date based on the OfferType</li><li>Mutate our data models to assign an offer to our member</li><li>Save our data to the database</li></ol><p>Generally speaking, steps 1 and 5 are fairly universal in functions/features in our applications. 
<p>When we assign an offer to a Member, we use the OfferType to calculate the expiration date, and the Value comes from an external API. The MediatR handler to do this looks like:</p><pre><code class="language-csharp">public class AssignOfferHandler : IRequestHandler&lt;AssignOfferRequest&gt;
{
    private readonly AppDbContext _appDbContext;
    private readonly HttpClient _httpClient;

    public AssignOfferHandler(
        AppDbContext appDbContext,
        HttpClient httpClient)
    {
        _appDbContext = appDbContext;
        _httpClient = httpClient;
    }

    public async Task&lt;Unit&gt; Handle(AssignOfferRequest request, CancellationToken cancellationToken)
    {
        var member = await _appDbContext.Members.FindAsync(request.MemberId, cancellationToken);
        var offerType = await _appDbContext.OfferTypes.FindAsync(request.OfferTypeId, cancellationToken);

        // Calculate offer value
        var response = await _httpClient.GetAsync(
            $"/calculate-offer-value?email={member.Email}&amp;offerType={offerType.Name}",
            cancellationToken);

        response.EnsureSuccessStatusCode();

        await using var responseStream = await response.Content.ReadAsStreamAsync(cancellationToken);
        var value = await JsonSerializer.DeserializeAsync&lt;int&gt;(responseStream,
            cancellationToken: cancellationToken);

        // Calculate expiration date
        DateTime dateExpiring;
        switch (offerType.ExpirationType)
        {
            case ExpirationType.Assignment:
                dateExpiring = DateTime.Today.AddDays(offerType.DaysValid);
                break;
            case ExpirationType.Fixed:
                dateExpiring = offerType.BeginDate?.AddDays(offerType.DaysValid)
                               ?? throw new InvalidOperationException();
                break;
            default:
                throw new ArgumentOutOfRangeException();
        }

        // Assign offer
        var offer = new Offer
        {
            MemberAssigned = member,
            Type = offerType,
            Value = value,
            DateExpiring = dateExpiring
        };
        member.AssignedOffers.Add(offer);
        member.NumberOfActiveOffers++;

        await _appDbContext.Offers.AddAsync(offer, cancellationToken);
        await _appDbContext.SaveChangesAsync(cancellationToken);

        return Unit.Value;
    }
}
</code></pre><p>It's quite a lot of code, but it's not <em>huge</em>. We do see that there are some code comments to split out the major sections. Roughly, the method breaks down to:</p><ol><li>Data access to load data models we care about</li><li>Calculate offer value using external web API</li><li>Calculate expiration date based on the OfferType</li><li>Mutate our data models to assign an offer to our member</li><li>Save our data to the database</li></ol><p>Generally speaking, steps 1 and 5 are fairly universal in the functions/features of our applications. We almost always need to load data, work with data, then save the data. However, it's in steps 2-4 that the real business logic lies.</p><p>So what's wrong with this? The biggest code smell I see is Long Function (Method), where, well, my method is too long to fit on a screen. It's long enough to need code comments to split up the main parts.</p><p>In the next post, we'll look at how to tackle this long method, eventually refactoring our code into the domain model.</p>

Domain-Driven Refactoring: Intro https://jimmybogard.com/domain-driven-refactoring-intro/ Jimmy Bogard Thu, 03 Jun 2021 13:02:22 +0000 <p>A common theme in domain-driven design is design patterns. When you start learning about DDD, you'll be presented with many code-level concepts such as:</p><ul><li>Aggregates</li><li>Entities</li><li>Value Objects</li><li>Repositories</li><li>Specifications</li><li>Factories</li></ul><p>With all of these patterns (and more), it's easy to fall into the trap of blindly applying patterns and complex layering with the hope that if we just apply a little more structure, our code will be "clean".</p><p>In practice, this is rarely the case. No amount of prescriptive guidance results in automatically "clean" code, or, as our goal should be, highly cohesive code. So if structure can't save us, what do we do?</p><p>Over the years, the only truly effective technique I've found for creating highly cohesive, maintainable code is <strong><a href="https://refactoring.com/">refactoring</a></strong>. But it seems to be a lost art - very few developers I meet or teams I work with treat refactoring as an essential, required skill.</p><p>Instead of trying to put code into prescriptive buckets, what I've found works best is to start with the simplest solution that can possibly work, and once it works, examine the code for code smells. For a given code smell, try a refactoring, and look to see if that refactoring resulted in more cohesive, understandable code.
If this sounds a lot like the TDD steps of Red-Green-Refactor - it's no coincidence.</p><p>Back to DDD, with its prescriptive (and constraining) patterns, I've <em>also</em> found that the best domain models aren't ones that try to follow highly specific rules, but ones that use code smells and refactoring techniques to build them.</p><p>In this series, I'll do exactly that - start with a set of code that's boring, procedural, and has an intentionally anemic domain model. And with standard code smells and refactoring techniques, I'll step-by-step refactor the procedural code, pushing the behavior into the domain, resulting in an actual behavioral domain model that arose naturally through refactoring instead of artificially through layers and rules.</p>

Local Development with Azure Service Bus https://jimmybogard.com/local-development-with-azure-service-bus/ Jimmy Bogard Tue, 06 Apr 2021 16:13:39 +0000 <p>For teams new to Azure Service Bus, one of the first questions you have to answer is "how do I develop against this?" And it turns out the answer isn't that straightforward - because it's currently impossible to run Azure Service Bus outside of Azure. There's no install. There's no Docker image. There's no emulator. This is a bit alien to most developers, as typically we can run the entire production workload locally. My team uses:</p><ul><li>Azure SQL</li><li>Azure Functions</li><li>ASP.NET Core (and .NET Core Workers)</li><li>Node.js (SPA)</li><li>Docker</li><li>Kubernetes</li><li>CosmosDB</li><li>Azure Service Bus</li></ul><p>Of all these, <em>only</em> the last one on the list can't be run external to Azure. This poses a challenge to the team - we want to ensure that environments are isolated from each other, and that each developer's environment won't interfere with another's:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/4/61243_image.png" class="kg-image"><figcaption>Developer resources isolated</figcaption></figure><p>When developers are forced to use shared resources, we introduce contention. We can no longer reason about <em>our</em> code versus someone else's. This is especially problematic for stateful resources, where I have to worry about someone else changing the state I'm working with.</p><p>Back to Azure Service Bus: what we <em>don't</em> want is multiple developers all reading and writing from the same queues.
I don't want one developer producing messages, and another developer consuming them, while the original developer is left waiting for those messages to arrive.</p><p>To develop effectively, we need to figure out a strategy to ensure isolated developer environments. Unfortunately for us, there's not really any guidance in the Microsoft documentation on how to do this. Compare to Azure Functions, which has a <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-develop-local">top-level documentation page</a> on doing exactly this:</p><figure class="kg-card kg-image-card"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/4/61249_image.png" class="kg-image"></figure><p>Or <a href="https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator?tabs=cli%2Cssl-netstd21">Azure Cosmos DB</a>:</p><figure class="kg-card kg-image-card"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/4/61250_image.png" class="kg-image"></figure><p>But there's nothing in the Azure Service Bus docs on local development. There are lots of guides on how to develop against Azure Service Bus, but nothing around strategies for isolating individual developers from each other.</p><p>I wound up <a href="https://github.com/Azure/azure-service-bus/issues/223">creating a GitHub issue for this</a>, just to understand what exactly the local development story is:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/4/61253_image.png" class="kg-image"><figcaption>No local development story</figcaption></figure><p>There is no documented story, so what are our options? As the top-level resource bucket in <a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview#namespaces">Azure Service Bus is a "namespace"</a>, we can:</p><ul><li>Isolate developers in a common namespace</li><li>Create a namespace per developer</li><li>Abstract our usage of Azure Service Bus and run something else locally</li></ul><p>These each have their benefits and drawbacks, however. Namespaces aren't free, and each namespace tier has different features, so we have to consider costs and capabilities.</p><h3 id="isolate-developers-in-common-namespace">Isolate Developers in Common Namespace</h3><p>In this approach, we pick an appropriate tier of Azure Service Bus (probably Standard or Premium) and create isolated resources for each individual developer (or machine). We have to use a naming convention to separate these resources, perhaps using the machine name or developer's login:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/4/61416_image.png" class="kg-image"><figcaption>Service bus resources per developer</figcaption></figure><p>One advantage to this approach is we can use a higher Service Bus pricing tier (perhaps even Premium, around $700/mo), ensuring our developers use the exact same resource during development as in production.</p><p>The downside is all this additional naming complexity and the pain of setting all this parallel infrastructure up. You'll have to roll your own convention and setup, and ensure your code can handle topics, subscriptions, and queues having somewhat dynamic names.</p>
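<p>As a minimal sketch of one such convention (my illustration; the post doesn't prescribe a specific scheme), keyed off the developer's login:</p><pre><code class="language-csharp">// Every developer gets a parallel set of entities inside the shared namespace.
var prefix = Environment.UserName.ToLowerInvariant();

var queueName = $"{prefix}-assign-offer";    // e.g. "jbogard-assign-offer"
var topicName = $"{prefix}-member-events";   // the entity names themselves are made up
</code></pre>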
<h3 id="namespace-per-developer">Namespace per developer</h3><p>Instead of having a single namespace that everyone shares, we can create isolated namespaces per developer:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/4/6152_image.png" class="kg-image"><figcaption>Namespace per developer</figcaption></figure><p>Then inside each namespace, the names of our individual resources (topic/subscription/queue) will be consistent across all environments:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/4/61456_image.png" class="kg-image"><figcaption>Consistent resource names inside namespace</figcaption></figure><p>This is a big benefit - our application code only really needs to change a connection string, while the rest of the code deals with consistent resource names.</p>
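<p>A minimal sketch of that benefit (my illustration, using the Azure.Messaging.ServiceBus client; the configuration key and queue name are invented):</p><pre><code class="language-csharp">// Per-developer configuration (user secrets, appsettings.Development.json, etc.)
// supplies that developer's own namespace; the "ServiceBus:ConnectionString" key
// and the "assign-offer" queue name here are hypothetical.
var connectionString = configuration["ServiceBus:ConnectionString"];

// Entity names never change between environments - only the namespace does.
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("assign-offer");
</code></pre>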
<p>The downside is creating this Service Bus resource in the first place. When you create a Service Bus namespace, you have a few decisions:</p><ul><li>What Azure subscription to use</li><li>Namespace name</li><li>Pricing tier</li></ul><p>Even the first question can be difficult. MSDN and Visual Studio subscriptions come with Azure credits, but I've found that unless they need to, teams don't activate those subscription credits. You can use a corporate subscription as well, but typically the process for creating Azure resources isn't open to developers. If it is, it's usually in a sandbox/dev Azure Resource Group.</p><p>Next, you have the namespace name. This name isn't just unique to the subscription - it has to be unique across <em>all</em> Azure Service Bus namespaces. You'll need to come up with a naming convention that's distinct per developer/environment/company.</p><p>Finally, the pricing tier. Premium namespaces are not cheap - ~$700/mo. However, Standard namespaces aren't expensive - ~$10/mo. And that cost is shared across an entire subscription. For multiple developers on a single subscription, that $10/mo is a one-time charge.</p><p>Not insurmountable challenges, but again, we don't have to do any of this with a SQL database or CosmosDB database.</p><h3 id="transport-abstraction">Transport abstraction</h3><p>Finally, we can abstract our transport and swap our usage locally. We can do this at the protocol (AMQP) level, or at a higher-level abstraction using an OSS library like NServiceBus. With this, our application code can swap messaging transports to a local version when we're developing locally:</p><pre><code class="language-csharp">if (context.HostingEnvironment.IsDevelopment())
{
    endpoint.UseTransport&lt;RabbitMQTransport&gt;()
        .ConnectionString(context.Configuration["Messaging:ConnectionString"])
        .UseConventionalRoutingTopology()
        .EnableInstallers();
}
else
{
    endpoint.UseTransport&lt;AzureServiceBusTransport&gt;()
        .ConnectionString(context.Configuration["Messaging:ConnectionString"]);
}</code></pre><p>Hopefully, the transports are close enough (or it's just pure AMQP protocol), and I'm not relying on too many transport-specific features in local development - or hitting transport-specific limitations that you only encounter when running real-life workloads. But this can work, and we can even use containerized resources.</p><p>I asked the question on <a href="https://twitter.com/jbogard/status/1374702041625411587">Twitter about these options</a>, and got roughly an even split between them:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/4/61331_image.png" class="kg-image"><figcaption>Twitter poll</figcaption></figure><p>With a "single shared namespace" winning out (which I thought was the hardest one to do).</p><p>So what should we do? It really depends, since we don't have LocalStack available for us here. It would of course be super nice to have a local emulator. And if you don't, I guess you can get into the sea:</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Final final note: cloud / paas / saas services that don&#39;t have a local development emulator/simulator story can also get in the sea. AWS should be paying LocalStack peeps mountains of cash.</p>&mdash; Damian Hickey (@randompunter) <a href="https://twitter.com/randompunter/status/1378758138728439811?ref_src=twsrc%5Etfw">April 4, 2021</a></blockquote></figure>

Taming the WSL 2 Resource Monster https://jimmybogard.com/taming-the-wsl-2-resource-monster/ Jimmy Bogard Fri, 05 Mar 2021 15:00:18 +0000 <p>I've switched all of my Docker and *nix usage over to WSL 2 for a while now, and have largely been free of any issues. That is, until recently, when I've been working on speeding up my team's production Docker image builds (another story). This work heavily involved modifying <code>Dockerfile</code> commands, trying out different approaches for building Angular apps to compare performance.</p><p>Evidently, restoring and building Angular apps is one of the most resource-intensive operations one can perform in technology, because after a day or two of this effort, my machine ground to a halt when <em>not</em> performing a build. Ultimately, I hit two limits:</p><ul><li>RAM</li><li>Disk space</li></ul><p>WSL 2, much like Chrome, will consume all available resources until your machine begins to smoke.
Unlike Chrome, WSL 2 will <em>not</em> release these resources, sometimes even after a reboot.</p><p>The RAM issue is rather straightforward to solve: just create a <code>.wslconfig</code> file in the root of your user's home directory and <a href="https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig">configure it</a> to not consume so much:</p><pre><code>[wsl2]
memory=16GB
</code></pre><p>My machine has 32GB, so half to WSL 2 seemed reasonable. Disk space, unfortunately, was much harder.</p><h3 id="taming-wsl-2-disk-space">Taming WSL 2 Disk Space</h3><p>I first saw a problem when trying to build some Nx-based example apps:</p><pre><code>error Could not write file "C:\\src\\nrwl\\nx-examples\\yarn-error.log": "ENOSPC: no space left on device, write"
</code></pre><p>Uh oh. Maybe this is a mistake. Let's check the space:</p><figure class="kg-card kg-image-card kg-card-hascaption"><figcaption>Disk space with very little free</figcaption></figure>
very little free</figcaption></figure><p>Free space hovered around 1GB, but would get much lower, then larger, as the pagefile struggled to keep up. That's not good.</p><p>I ran the normal "disk cleanup" which would net me a whopping 80MB back:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/3/51357_image.png" class="kg-image"><figcaption>Disk cleanup giving me 80MB back</figcaption></figure><p>When this fails, I reach for the trusty <a href="https://windirstat.net/">WinDirStat</a> tool to see what we see. To my surprise, there was one ENORMOUS file filling up my drive:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="data:image/jpeg;base64,…" class="kg-image"></figure> Crossing the Generics Divide https://jimmybogard.com/crossing-the-generics-divide/ Jimmy Bogard urn:uuid:2bc74104-97e6-2759-08e1-739a0cb279ed Wed, 03 Mar 2021 14:54:21 +0000 <p>Generics are great, until they aren't, and when they aren't is when you don't know the type at <em>compile-time</em> but at
<em>runtime</em>. This isn't necessarily a bad thing, and isn't necessarily a design problem. Remember, <code>void Main</code> is not generic, so at some point, your program needs to cross the</p> <p>Generics are great, until they aren't, and when they aren't is when you don't know the type at <em>compile-time</em> but at <em>runtime</em>. This isn't necessarily a bad thing, and isn't necessarily a design problem. Remember, <code>void Main</code> is not generic, so at some point, your program needs to cross the generic divide. Sometimes, this is explicit (you instantiate an object of a closed generic type) or implicit (you use a DI container to inject a closed generic type). <a href="https://jeremydmiller.com/2020/07/27/calling-generic-methods-from-non-generic-code-in-net/">Jeremy Miller blogged about this as well</a>, and I've seen this scenario come up on many occasions.</p><p>The strategies for calling generic code from non-generic code depend a little bit on the nature of the generic code in question, and on how "dynamic" you want the calling to be.</p><p>In a recent project, in the insurance domain, we had a customer applying for insurance, and they could apply for multiple policies. Each policy had a bit of common information, plus information specific to that policy. Some application processing was the same, some was different. We modeled our domain something like:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/3/21934_image.png" class="kg-image"><figcaption>Class diagram of inheritance of Policy classes</figcaption></figure><p>All is well and good until you need to do something specific with one of your <code>Policy</code> classes.</p><p>The simplest solution is to "hardcode" it, using something like pattern matching to do something specific with each derived type. Let's say we want to validate each policy in the application.
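</p><p>The class diagram itself doesn't survive in text here, but a minimal sketch of the model it implies would look something like this (the <code>Application</code> class name and <code>Policies</code> property shape are my assumptions; <code>IPolicy</code> and the derived policy types appear in the code that follows):</p><pre><code class="language-csharp">public interface IPolicy { }

public class HomePolicy : IPolicy { }
public class AutoPolicy : IPolicy { }
public class LifePolicy : IPolicy { }

public class Application
{
    // common information, plus the list of applied-for policies
    public List&lt;IPolicy&gt; Policies { get; } = new();
}
</code></pre><p>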
We can loop through and pattern match on the type:</p><pre><code class="language-csharp">bool isValid = true; foreach (var policy in application.Policies) { isValid = isValid &amp;&amp; policy switch { HomePolicy home =&gt; Validate(home), LifePolicy life =&gt; Validate(life), AutoPolicy auto =&gt; Validate(auto), }; } </code></pre><p>Each <code>Validate</code> method is simply an overload with a different typed parameter:</p><pre><code class="language-csharp">private static bool Validate(AutoPolicy auto) { } private static bool Validate(HomePolicy home) { } private static bool Validate(LifePolicy life) { } </code></pre><p>This method works <em>OK</em> as long as the behavior in those methods is relatively simple and you don't have many conflicting dependencies.</p><p>Unfortunately, things get hairy once you try to extract these methods into classes, perhaps each with their own dependencies.</p><h3 id="introducing-the-generic-type">Introducing the generic type</h3><p>When we extract those generic methods into generic classes, we wind up creating some common interface:</p><pre><code class="language-csharp">public interface IPolicyValidator&lt;TPolicy&gt; where TPolicy : IPolicy { bool Validate(TPolicy policy); }</code></pre><p>Then we'll have derived types for our different policies:</p><pre><code class="language-csharp">public class LifePolicyValidator : IPolicyValidator&lt;LifePolicy&gt; { public bool Validate(LifePolicy policy) { /* Validate LifePolicy somehow */ } } </code></pre><p>Now our logic for individual derived types is neatly encapsulated behind these classes, but we're still able to use the derived type because the incoming parameter has been closed to the generic parameter, giving us <code>LifePolicy</code> instead of <code>IPolicy</code>. Everything's great, right? Not quite! Let's go back to the calling location:</p><pre><code class="language-csharp">bool isValid = true; foreach (var policy in application.Policies) { var policyValidator = container.GetService&lt;IPolicyValidator&lt;??&gt;&gt;(); isValid = isValid &amp;&amp; policyValidator.Validate(policy); } </code></pre><p>Instead of instantiating our derived validators, we use a container because our validators might have dependencies they use and we don't want to change our application validation code just because a single policy validator needs to do more stuff. Our problem here is we need to "know" the service type at compile time in order to pass that value through to the <code>IPolicyValidator&lt;TPolicy&gt;</code> open generic type.</p><p>We <em>could</em> go back to pattern matching:</p><pre><code class="language-csharp">bool isValid = true; foreach (var policy in application.Policies) { switch (policy) { case AutoPolicy auto: var autoPolicyValidator = container.GetService&lt;IPolicyValidator&lt;AutoPolicy&gt;&gt;(); isValid = isValid &amp;&amp; autoPolicyValidator.Validate(auto); break; case HomePolicy home: var homePolicyValidator = container.GetService&lt;IPolicyValidator&lt;HomePolicy&gt;&gt;(); isValid = isValid &amp;&amp; homePolicyValidator.Validate(home); break; case LifePolicy life: var lifePolicyValidator = container.GetService&lt;IPolicyValidator&lt;LifePolicy&gt;&gt;(); isValid = isValid &amp;&amp; lifePolicyValidator.Validate(life); break; } } </code></pre><p>Yeesh, that's not pretty. It's all compile-safe, but lots of duplication.
Not really an improvement!</p><p>What we'd like to do is allow our non-generic code to call our generic code, but in a compile-safe way.</p><p>There are two major ways of doing this in OO: using inheritance or composition. Let's look at the inheritance way first.</p><h3 id="wrapping-generics-using-inheritance">Wrapping generics using inheritance</h3><p>One way we can get around this is creating a base type that isn't generic, and that will be the signature our calling class calls:</p><pre><code class="language-csharp">public interface IPolicyValidator { bool Validate(IPolicy policy); } </code></pre><p>That's something our application code can work with. Next, we need to bridge the gap between this non-generic type, and our generic ones. For that, we'll create a type that implements <em>both</em> interfaces - the generic, and non-generic one:</p><pre><code class="language-csharp">public abstract class PolicyValidator&lt;TPolicy&gt; : IPolicyValidator, IPolicyValidator&lt;TPolicy&gt; where TPolicy : IPolicy { public bool Validate(IPolicy policy) =&gt; Validate((TPolicy) policy); public abstract bool Validate(TPolicy policy); } </code></pre><p>Now our trick here is that the non-generic implementation delegates to the generic version, casting the parameter to the correct generic type <code>TPolicy</code>. The generic method is now abstract, and our policy validator implementations now inherit this class:</p><pre><code class="language-csharp">public class LifePolicyValidator : PolicyValidator&lt;LifePolicy&gt; { public override bool Validate(LifePolicy policy) { /* Validate LifePolicy somehow */ } } </code></pre><p>We now <code>override</code> that abstract class instead of implementing it directly. Back in our calling code, we now need to work with this non-generic type. However, we need to get the <em>correct</em> implementation from our container. For that, we can inspect the runtime type to ask for the correct service type from the container:</p><pre><code class="language-csharp">bool isValid = true; foreach (var policy in application.Policies) { var policyType = policy.GetType(); var validatorType = typeof(IPolicyValidator&lt;&gt;).MakeGenericType(policyType); var policyValidator = (IPolicyValidator) container.GetService(validatorType); isValid = isValid &amp;&amp; policyValidator.Validate(policy); } </code></pre><p>We ask the container for the correct validator type <code>IPolicyValidator&lt;Whatever&gt;</code> where we fill in the generic parameters at runtime. We ask the container for this service, casting the <code>IPolicyValidator&lt;Whatever&gt;</code> result into the non-generic type <code>IPolicyValidator</code> because we """"know"""" that the service type <code>PolicyValidator&lt;TPolicy&gt;</code> actually implements both <code>IPolicyValidator</code> and <code>IPolicyValidator&lt;TPolicy&gt;</code>.</p><p>This works great, but only if our <code>LifePolicyValidator</code> inherits from this bridge type. Not great, but the code is relatively straightforward to understand.
With the inheritance, we might be able to extract common logic into the base class as well.</p><p>However, forcing an inheritance hierarchy isn't ideal in many scenarios, so let's now look at a composition approach.</p><h3 id="wrapping-generics-using-composition">Wrapping generics using composition</h3><p>With composition, we'll still have our common, non-generic interface that our calling code uses:</p><pre><code class="language-csharp">public interface IPolicyValidator { bool Validate(IPolicy policy); } </code></pre><p>We'll still need a class bridging between non-generic and generic, but this time, we'll have our bridge class compose the generic implementation instead of implementing/inheriting it:</p><pre><code class="language-csharp">public class PolicyValidator&lt;TPolicy&gt; : IPolicyValidator where TPolicy : IPolicy { private readonly IPolicyValidator&lt;TPolicy&gt; _inner; public PolicyValidator(IPolicyValidator&lt;TPolicy&gt; inner) =&gt; _inner = inner; public bool Validate(IPolicy policy) =&gt; _inner.Validate((TPolicy) policy); } </code></pre><p>Our generic bridge class implements the non-generic interface. Its implementation of the non-generic method forwards our non-generic <code>IPolicy</code>-based method to a generic one, casting along the way, but this time, our generic <code>IPolicyValidator&lt;TPolicy&gt;</code> is injected, composed into our bridge class. From our calling class, we'll still need to do a bit of manual closing of types:</p><pre><code class="language-csharp">bool isValid = true; foreach (var policy in application.Policies) { var policyType = policy.GetType(); var validatorType = typeof(PolicyValidator&lt;&gt;).MakeGenericType(policyType); var policyValidator = (IPolicyValidator) container.GetService(validatorType); isValid = isValid &amp;&amp; policyValidator.Validate(policy); } </code></pre><p>This time, we ask the container for the closed bridge class type, which will get wired up correctly when we register the open type (see the registration sketch below). The container will inspect the closed type and fill in the right services for any dependencies that close the type. Our cast to <code>IPolicyValidator</code> is much safer now because we control that bridge type and no longer care how the <code>IPolicyValidator&lt;TPolicy&gt;</code> implementations are created.</p><p>So which should you go with? I still prefer the composition route since that still allows me to use inheritance in the implementations when I want, but only when the behavior/logic in my generic types needs it.
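</p><p>For reference, the registration that makes this resolve correctly might look like the following with Microsoft.Extensions.DependencyInjection. This is a sketch under my own assumptions; the post doesn't show its registrations, and the Home/Auto validators are presumed analogous to <code>LifePolicyValidator</code>:</p><pre><code class="language-csharp">var services = new ServiceCollection();

// the open generic bridge type; the container closes it per policy type
services.AddTransient(typeof(PolicyValidator&lt;&gt;));

// the generic validator implementations the bridge composes
services.AddTransient&lt;IPolicyValidator&lt;LifePolicy&gt;, LifePolicyValidator&gt;();
services.AddTransient&lt;IPolicyValidator&lt;HomePolicy&gt;, HomePolicyValidator&gt;();
services.AddTransient&lt;IPolicyValidator&lt;AutoPolicy&gt;, AutoPolicyValidator&gt;();

var container = services.BuildServiceProvider();
</code></pre><p>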
Both are great to keep in your back pocket to be able to have your generic cake and your non-generic calling code eat it too.</p> OpenTelemetry 1.0 Extensions Released https://jimmybogard.com/opentelemetry-1-0-extensions-released/ Jimmy Bogard urn:uuid:07f5252e-749e-190b-4a61-4e4a15af55be Wed, 24 Feb 2021 15:27:03 +0000 <p>With the <a href="https://medium.com/opentelemetry/opentelemetry-specification-v1-0-0-tracing-edition-72dd08936978">OpenTelemetry tracing specification reaching 1.0</a>, and the subsequent <a href="https://github.com/open-telemetry/opentelemetry-dotnet/releases/tag/core-1.0.1">1.0 release of the core components of .NET</a>, I've pushed updates to my OpenTelemetry packages for:</p><ul><li><a href="https://www.nuget.org/packages/NServiceBus.Extensions.Diagnostics.OpenTelemetry/">NServiceBus.Extensions.Diagnostics.OpenTelemetry</a></li><li><a href="https://www.nuget.org/packages/MongoDB.Driver.Core.Extensions.OpenTelemetry/">MongoDB.Driver.Core.Extensions.OpenTelemetry</a></li></ul><p>While those packages didn't really change much, one</p> <p>With the <a href="https://medium.com/opentelemetry/opentelemetry-specification-v1-0-0-tracing-edition-72dd08936978">OpenTelemetry tracing specification reaching 1.0</a>, and the subsequent <a href="https://github.com/open-telemetry/opentelemetry-dotnet/releases/tag/core-1.0.1">1.0 release of the core components of .NET</a>, I've pushed updates to my OpenTelemetry packages for:</p><ul><li><a href="https://www.nuget.org/packages/NServiceBus.Extensions.Diagnostics.OpenTelemetry/">NServiceBus.Extensions.Diagnostics.OpenTelemetry</a></li><li><a href="https://www.nuget.org/packages/MongoDB.Driver.Core.Extensions.OpenTelemetry/">MongoDB.Driver.Core.Extensions.OpenTelemetry</a></li></ul><p>While those packages didn't really change much, one thing that did was the introduction of the <a href="https://github.com/open-telemetry/opentelemetry-dotnet/blob/main/src/OpenTelemetry/README.md#resource">ResourceBuilder API</a>, which lets me put in tags for <em>all</em> spans in a given system.
Most likely, you'll use that API to specify the name of the resource:</p><pre><code class="language-csharp">services.AddOpenTelemetryTracing(builder =&gt; builder .SetResourceBuilder(ResourceBuilder .CreateDefault() .AddService(Program.EndpointName)) .AddAspNetCoreInstrumentation() .AddSqlClientInstrumentation(opt =&gt; opt.SetDbStatementForText = true) .AddNServiceBusInstrumentation() .AddZipkinExporter(o =&gt; { o.Endpoint = new Uri("http://localhost:9411/api/v2/spans"); }) ); </code></pre><p>This ensures the resource name shows up in our traces:</p><figure class="kg-card kg-image-card"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2021/2/241524_image.png" class="kg-image"></figure><p>You can of course set other resource-specific attributes, but this replaces the per-exporter configuration some collectors used, such as setting the resource name on the Zipkin exporter.</p><p>I've updated my tracing examples (<a href="https://github.com/jbogard/nsb-diagnostics-poc">Hello World</a> and <a href="https://github.com/jbogard/presentations/tree/master/DistributedTracing/Example">Microservice</a>) to 1.0 as well. Enjoy!</p> Choosing a ServiceLifetime https://jimmybogard.com/choosing-a-servicelifetime/ Jimmy Bogard urn:uuid:2c296372-8e89-9175-b8b5-a3116059835a Thu, 28 Jan 2021 14:37:00 +0000 <p>A subtle source of errors and less-than-subtle source of frustration is understanding and using service lifetimes appropriately with <a href="https://docs.microsoft.com/en-us/dotnet/core/extensions/dependency-injection">.NET Core dependency injection</a>. Service lifetimes, while complicated on the surface, can help developers who need to share state across different lifetimes of an application. Typically, a long-running application has three service</p> <p>A subtle source of errors and less-than-subtle source of frustration is understanding and using service lifetimes appropriately with <a href="https://docs.microsoft.com/en-us/dotnet/core/extensions/dependency-injection">.NET Core dependency injection</a>. Service lifetimes, while complicated on the surface, can help developers who need to share state across different lifetimes of an application.
Typically, a long-running application has three service lifetimes that we want a service to "live" for or be shared:</p><ul><li>One instance shared for the lifetime of the application</li><li>One instance shared for some request/action/activity/unit of work</li><li>I don't care/it doesn't matter/why are you asking me</li></ul><p>Without dependency injection, we'd typically attack these by:</p><ul><li>Create one static/shared instance</li><li>Create one instance then pass it through to everything in the request/action/activity/unit of work</li><li>Just <code>new</code> it up whenever</li></ul><p>If I'm using a framework that leverages a dependency injection container, I'll need to instead conform to the container's rules about lifetimes and avoid trying to use the "old" ways. In the .NET Core container, these three <a href="https://docs.microsoft.com/en-us/dotnet/core/extensions/dependency-injection#service-lifetimes">service lifetimes</a> map to:</p><ul><li><code>ServiceLifetime.Singleton</code></li><li><code>ServiceLifetime.Scoped</code></li><li><code>ServiceLifetime.Transient</code></li></ul><p>Typically, your application won't actually create scopes itself, and instead that's done by some middleware you don't interact with (a good thing).</p><p>So why is it so easy to get wrong? In my experience fielding MANY questions on GitHub, it all comes down to <em>how</em> we should decide which lifetime to use. Most mistakes come from premature optimization, and I've been guilty of this as well.</p><h3 id="choosing-a-servicelifetime">Choosing a <code>ServiceLifetime</code></h3><p>Going back to the three typical lifetimes before dependency injection, the reason a service would <strong>need</strong> a different lifetime is quite simple: <strong>State</strong></p><p>For an injected service, my lifetime choice happens at registration time, when I'm connecting the service to its implementation. It's the <strong>implementation's state</strong> that determines the lifetime. When I refer to state, I don't mean merely the fields on the class - as the fields could be other services. I refer to "State" as "data" or information, and the scope in which that data or information needs to be shared determines the <code>ServiceLifetime</code>:</p><!--kg-card-begin: markdown--><table> <thead> <tr> <th>Scope to share state</th> <th><code>ServiceLifetime</code></th> </tr> </thead> <tbody> <tr> <td>Application</td> <td><code>Singleton</code></td> </tr> <tr> <td>Request/action/activity</td> <td><code>Scoped</code></td> </tr> <tr> <td>Stateless or should not share state</td> <td><code>Transient</code></td> </tr> </tbody> </table> <!--kg-card-end: markdown--><p>The safe default for stateless services is <code>Transient</code>. <em>If</em> my service has state <em>then</em> I should look at other scopes, and make the service lifetime decision based on the scope in which my state should be shared.</p><h3 id="how-not-to-choose-a-servicelifetime">How not to choose a <code>ServiceLifetime</code></h3><p>Some common ways that can mess people up:</p><h4 id="my-object-is-stateless-it-s-a-waste-to-create-objects-more-than-once-">My object is stateless, it's a waste to create objects more than once!</h4><p>Tempting, but premature optimization. Subtle errors crop up if your object is stateless <em>but</em> takes in dependencies.
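</p><p>To make the subtle error concrete, here's a sketch of the classic captive-dependency mistake (type names here are mine, not from the post): the service itself holds no data, but it takes a scoped dependency, so registering it as a singleton captures that dependency past its intended scope:</p><pre><code class="language-csharp">// "stateless" service: no data of its own, but a scoped dependency
public class PriceCalculator
{
    private readonly AppDbContext _db;

    public PriceCalculator(AppDbContext db) =&gt; _db = db;
}

// services.AddDbContext&lt;AppDbContext&gt;();    // Scoped
// services.AddSingleton&lt;PriceCalculator&gt;(); // bug: _db is captured past its scope
// services.AddTransient&lt;PriceCalculator&gt;(); // safe default for stateless services
</code></pre><p>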
If you have a stateless object with no dependencies, consider avoiding DI altogether as it's a stable dependency, and just <code>new</code> it up.</p><h4 id="ok-but-can-i-at-least-make-it-scoped-it-s-a-waste-">OK but can I at least make it <code>Scoped</code>? It's a waste!</h4><p>No, you shouldn't. Your service registration also serves as documentation. When the next developer sees some implementation registered as <code>Scoped</code>, the correct assumption is that the implementation has some state that should be shared for a single scope/request/action. If we poke inside and see none, it's confusing. Premature optimization is confusing; just don't.</p><h4 id="it-s-a-performance-problem-all-this-waste-for-stateless-services-">It's a performance problem, all this waste for stateless services!</h4><p>Great, prove it! Use a profiler and show that your service instantiation is causing performance issues with GC/memory.</p><p>And now that you've got the proof (that would be the 0.001% of you), then, and only then, should you choose a different service lifetime for stateless services for explicit performance reasons.</p><p><strong>Choose your service lifetime based on the intended scope of the implementation's state.</strong></p> A Lap Around ActivitySource and ActivityListener in .NET 5 https://jimmybogard.com/activitysource-and-listener-in-net-5/ Jimmy Bogard urn:uuid:9ebdcfc3-1c4e-449e-1eee-73b3be82af2e Mon, 04 Jan 2021 20:40:16 +0000 <p>Part of the new DiagnosticSource API is new ways of "listening" in to activities with the addition of the <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.activitysource?view=net-5.0">ActivitySource</a></code> and <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.activitylistener?view=net-5.0">ActivityListener</a></code> APIs. These are intended to replace the <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.diagnosticsource?view=net-5.0">DiagnosticSource</a></code> and <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.diagnosticlistener?view=net-5.0">DiagnosticListener</a></code> APIs. However, the latter two types aren't deprecated, and aren't being removed from existing usages. That said, ActivitySource/</p> <p>Part of the new DiagnosticSource API is new ways of "listening" in to activities with the addition of the <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.activitysource?view=net-5.0">ActivitySource</a></code> and <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.activitylistener?view=net-5.0">ActivityListener</a></code> APIs.
These are intended to replace the <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.diagnosticsource?view=net-5.0">DiagnosticSource</a></code> and <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.diagnosticlistener?view=net-5.0">DiagnosticListener</a></code> APIs. However, the latter two types aren't deprecated, and aren't being removed from existing usages. That said, ActivitySource/Listener represent a pretty big leap forward in <a href="https://github.com/dotnet/designs/blob/main/accepted/2020/diagnostics/activity-improvements.md#rationale-and-use-cases">usability and performance</a> over the old APIs.</p><p>I've <a href="https://jimmybogard.com/building-end-to-end-diagnostics-activitysource-and-open/">shown a little bit</a> on the <code>ActivitySource</code> side of things, but there's the other side - listening to activity events. This is where the new API starts to show its benefits. The previous <code>DiagnosticListener</code> API used nested observables without much insight other than the events. You had one "callback" for when a new <code>DiagnosticSource</code> was available, then another for individual events:</p><pre><code class="language-csharp">using var sub = DiagnosticListener.AllListeners.Subscribe(listener =&gt; { Console.WriteLine($"Listener name {listener.Name}"); listener.Subscribe(kvp =&gt; Console.WriteLine($"Received event {kvp.Key}:{kvp.Value}")); }); </code></pre><p>The events didn't have any semantic meaning. The "key" for the event was whatever you wanted. The "value" was <code>object</code>, and could again be anything. Some events used semantic names for "Start" and "Stop" events, some did not. Some events had some context object passed in through the <code>Value</code>, some were anonymous types that required reflection (gross!) to get values out.</p><p>In the new world, the API is separated into the individual concerns when instrumenting:</p><ul><li>Should I listen to these activities?</li><li>Should I sample <em>this</em> activity?</li><li>Notify me when an activity starts.</li><li>Notify me when an activity stops.</li></ul><p>With a <code>DiagnosticListener</code>, the <code>DiagnosticListener.AllListeners</code> property is an <code>IObservable&lt;DiagnosticListener&gt;</code>, from which you only have the <code>Name</code> to make a decision to listen to events.</p><p>The new API combines all of these into one single object:</p><pre><code class="language-csharp">public sealed class ActivityListener : IDisposable { public Action&lt;Activity&gt;? ActivityStarted { get; set; } public Action&lt;Activity&gt;? ActivityStopped { get; set; } public Func&lt;ActivitySource, bool&gt;? ShouldListenTo { get; set; } public SampleActivity&lt;string&gt;? SampleUsingParentId { get; set; } public SampleActivity&lt;ActivityContext&gt;? Sample { get; set; } public void Dispose() =&gt; ActivitySource.DetachListener(this); }</code></pre><p>The <code>SampleActivity</code> type is a delegate:</p><pre><code class="language-csharp">public delegate ActivitySamplingResult SampleActivity&lt;T&gt;(ref ActivityCreationOptions&lt;T&gt; options);</code></pre><p>This means we can have different sampling decisions based on how much detail we have.
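</p><p>For example, a sampling callback might record full detail for your own sources and drop everything else. This is a sketch of mine, not code from the post; the source-name convention is an assumption:</p><pre><code class="language-csharp">var listener = new ActivityListener
{
    ShouldListenTo = _ =&gt; true,
    // keep full detail for our own sources, drop the rest
    Sample = (ref ActivityCreationOptions&lt;ActivityContext&gt; options) =&gt;
        options.Source.Name.StartsWith("MyCompany.")
            ? ActivitySamplingResult.AllDataAndRecorded
            : ActivitySamplingResult.None
};
</code></pre><p>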
Something nice: on <code>Dispose</code>, our <code>ActivityListener</code> automatically disconnects from the <code>ActivitySource</code>.</p><p>Attaching an <code>ActivityListener</code> is similar to <code>DiagnosticListener</code>, except now we simply call a method to add our listener:</p><pre><code class="language-csharp">ActivitySource.AddActivityListener(listener);</code></pre><p>We now get explicit callbacks for Start/Stop, as well as for Sampling. Instrumentation libraries will use the <code>Sample</code> property to decide the level of sampling for this activity - some of which will eventually make it to trace context headers.</p><p>In my typical use case, I'm using sampling to do integration testing, so I'll typically have my listener set to <code>ActivitySamplingResult.AllData</code>. With the <code>DiagnosticListener</code>, there were no different levels of listening. Someone was either listening, or not, with no levels in between.</p><p>Next, we get distinct <code>ActivityStarted</code> and <code>ActivityStopped</code> callbacks, where we get the entire <code>Activity</code> instead of a <code>KeyValuePair&lt;string, object?&gt;</code> - a much richer and more focused API:</p><pre><code class="language-csharp">using var listener = new ActivityListener { ShouldListenTo = _ =&gt; true, Sample = (ref ActivityCreationOptions&lt;ActivityContext&gt; _) =&gt; ActivitySamplingResult.AllData, ActivityStarted = activity =&gt; Console.WriteLine($"{activity.ParentId}:{activity.Id} - Start"), ActivityStopped = activity =&gt; Console.WriteLine($"{activity.ParentId}:{activity.Id} - Stop") }; ActivitySource.AddActivityListener(listener); </code></pre><p>One thing we don't have now is callbacks for generic events as we had with the <code>DiagnosticListener</code> before. It turns out all this information is now on our <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.activity?view=net-5.0">Activity</a></code>:</p><ul><li>Links</li><li>Tags</li><li>Events</li><li>Baggage</li></ul><p>Rather than having a mystery meat object of the <code>KeyValuePair</code> from before, our <code>Activity</code> now has much more information (thanks to the OpenTelemetry folks for <a href="https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/api.md#span">standardizing this information</a>).</p><p>But what about existing users of <code>DiagnosticListener</code>? Well it turns out we can still get access to <code>Activity</code> start/stop events because <code>Activity</code> has a default <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.activity.source?view=net-5.0">Source</a></code> that gets initialized with a blank <code>Name</code>.
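</p><p>A sketch of what listening through that default source could look like (the blank-name check follows from the sentence above; this isn't code from the post):</p><pre><code class="language-csharp">var legacyListener = new ActivityListener
{
    // the default source for legacy activities has a blank name
    ShouldListenTo = source =&gt; source.Name == string.Empty,
    Sample = (ref ActivityCreationOptions&lt;ActivityContext&gt; _) =&gt; ActivitySamplingResult.AllData,
    ActivityStopped = activity =&gt; Console.WriteLine($"{activity.OperationName} - Stop")
};
ActivitySource.AddActivityListener(legacyListener);
</code></pre><p>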
Although <code>DiagnosticListener</code> will not "forward" events to an <code>ActivitySource</code>, listening through that default <code>Source</code> is certainly possible.</p><p>While most folks will never really need to touch the Diagnostics API, the improvements with .NET 5 and following OpenTelemetry concepts mean the future is bright for telemetry and instrumentation - even if you only turn AppInsights on and forget about it.</p> Increasing Trace Cardinality with Activity Tags and Baggage https://jimmybogard.com/increasing-trace-cardinality-with-tags-and-baggage/ Jimmy Bogard urn:uuid:1eb9fb34-fdad-774b-8cf7-f88e96ba8f51 Wed, 16 Dec 2020 21:03:17 +0000 <p>One of the first "oh no" moments for folks new to distributed tracing is the needle in the haystack problem. Someone reports an error, you go look for traces in your tool (Azure Monitor or whatever), and because there are thousands of traces, you can't easily figure out which is</p> <p>One of the first "oh no" moments for folks new to distributed tracing is the needle in the haystack problem. Someone reports an error, you go look for traces in your tool (Azure Monitor or whatever), and because there are thousands of traces, you can't easily figure out which is <em>your</em> trace that you want to find. Fundamentally, this is the challenge of cardinality. We want a high enough cardinality of data to be able to effectively find our individual trace when searching.</p><p>OpenTelemetry and the Activity API give us two main ways to add additional information to spans/traces:</p><ul><li><a href="https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/common/common.md#attributes">Attributes</a> (tags)</li><li><a href="https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#baggage">Baggage</a></li></ul><p>In the Activity API, attributes are "Tags" and Baggage is still "Baggage", since these names predate OpenTelemetry.</p><p>The key difference between these two is propagation. Both are key-value pairs of information, but only Baggage propagates to subsequent traces. This means that interprocess communication needs a means to propagate - and that's exactly what the <a href="https://www.w3.org/TR/baggage/">W3C Baggage standard</a> describes.</p><p>We do have to be careful about baggage, however, as it will accumulate. Anything added to baggage will show up in all child spans.</p><h3 id="tags-and-baggage-in-activities">Tags and Baggage in Activities</h3><p>With the activity API, it's quite straightforward to add tags and baggage to the current activity.
Suppose we want to include some operation ID as part of our trace with a tag:</p><pre><code class="language-csharp">[HttpGet] public async Task&lt;ActionResult&lt;Guid&gt;&gt; Get(string message) { var command = new SaySomething { Message = message, Id = Guid.NewGuid() }; Activity.Current?.AddTag("cart.operation.id", command.Id.ToString()); </code></pre><p>The <code>AddTag</code> and <code>AddBaggage</code> methods let us add these key/value pairs of data. If we're using OpenTelemetry, our tags will automatically show up in a trace:</p><figure class="kg-card kg-image-card"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2020/12/161937_image.png" class="kg-image" alt="Custom tag showing up in Zipkin trace"></figure><p>This tag can be searched on, making sure we only find the trace we're interested in:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2020/12/161939_image.png" class="kg-image"><figcaption>Searching tag value in Zipkin</figcaption></figure><p>If you're already using logging contexts, this is probably quite familiar to you.</p><p>While tags are great for including additional information about a single span, they don't propagate, so you might have a harder time correlating multiple sets of data together, such as "find all queries executed for cart 123". You might only know the cart at one specific span, but not all the way down in all the related activities.</p><p>For that, we can use Baggage:</p><pre><code class="language-csharp">[HttpGet] public async Task&lt;ActionResult&lt;Guid&gt;&gt; Get(string message) { var command = new SaySomething { Message = message, Id = Guid.NewGuid() }; Activity.Current?.AddBaggage("cart.operation.id", command.Id.ToString()); </code></pre><p>For both interprocess and intraprocess Activities, the <code>Baggage</code> will propagate. Here we can see baggage propagate through the headers in RabbitMQ:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2020/12/16201_image.png" class="kg-image"><figcaption>Baggage and trace context in headers in RabbitMQ message</figcaption></figure><p>Unfortunately, baggage won't automatically show up in our traces; we'll have to do something special to get it to show up.</p><h3 id="reporting-baggage-in-telemetry-data">Reporting baggage in telemetry data</h3><p>While this context information gets passed through all of our spans, it won't necessarily show up in our tracing tools. This is because attributes are the primary reporting mechanism for information for traces, and baggage is a larger concept that can be used by logs, traces, and metrics to enrich each of those.
It might seem counterintuitive that baggage is not automatically reported, but it's simply because it's a broader concept intended to be consumed by other observability pillars.</p><p>For us, if we want to just automatically include all baggage as tags as activities are recorded, we can do so by registering a simple <code>ActivityListener</code> at startup:</p><pre><code class="language-csharp">public static void Main(string[] args) { var listener = new ActivityListener { ShouldListenTo = _ =&gt; true, ActivityStopped = activity =&gt; { foreach (var (key, value) in activity.Baggage) { activity.AddTag(key, value); } } }; ActivitySource.AddActivityListener(listener); CreateHostBuilder(args).Build().Run(); } </code></pre><p>With this in each of my applications, I can ensure that all baggage gets shipped as tags out to my tracing system:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2020/12/162015_image.png" class="kg-image"><figcaption>Baggage showing up in span for database interaction</figcaption></figure><p>Above, I can see the baggage I set way up in a <code>Controller</code> made it all the way down to a MongoDB call. It's crossed several process boundaries to get there:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2020/12/162017_image.png" class="kg-image"><figcaption>Trace highlighted with span far from baggage origination</figcaption></figure><p>Typically, we'll also include this baggage context in our structured logs with Serilog:</p><pre><code class="language-csharp">class BaggageEnricher : ILogEventEnricher { public void Enrich(LogEvent logEvent, ILogEventPropertyFactory propertyFactory) { if (Activity.Current == null) return; foreach (var (key, value) in Activity.Current.Baggage) { logEvent.AddPropertyIfAbsent(propertyFactory.CreateProperty(key, value)); } } } </code></pre><p>With baggage, we get a bit of overhead since it will piggyback everywhere. However, we can start to leverage our baggage to include contextual information that we find valuable in logs, traces, and metrics. Common data might be:</p><ul><li>Business identifiers (cart ID etc)</li><li>Workflow/batch identifiers</li><li>Session identifiers</li><li>Machine information</li><li>User information</li></ul><p>You'll have to, of course, follow privacy laws for some of this information (maybe don't log tax numbers), so care is needed here.
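</p><p>Wiring that enricher up is a one-liner in a typical Serilog configuration (a sketch of mine; the console sink is just for illustration):</p><pre><code class="language-csharp">Log.Logger = new LoggerConfiguration()
    .Enrich.With(new BaggageEnricher())
    .WriteTo.Console()
    .CreateLogger();
</code></pre><p>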
In practice, it's been invaluable to take some business information (say, a cart ID), and see <em>all</em> traces related to it, not just information only available in a single trace.</p><p>By including high cardinality data in our logs and traces, we can far more quickly locate what we're looking for, instead of resorting to small time windows as I've had to in the past.</p> .Net Project Builds with Node Package Manager https://lostechies.com/derekgreer/2020/12/10/dotnet-project-builds-with-npm/ Los Techies urn:uuid:658daf41-1dc4-075d-278e-3e9da2891e09 Thu, 10 Dec 2020 07:00:00 +0000 A few years ago, I wrote an article entitled Separation of Concerns: Application Builds &amp; Continuous Integration wherein I discussed the benefits of separating project builds from CI/CD concerns by creating a local build script which lives with your project. Not long after writing that article, I was turned on to what I’ve come to believe is one of the easiest tools I’ve encountered for managing .Net project builds thus far: npm. <p>A few years ago, I wrote an article entitled <a href="http://aspiringcraftsman.com/2016/02/28/separation-of-concerns-application-builds-continuous-integration/">Separation of Concerns: Application Builds &amp; Continuous Integration</a> wherein I discussed the benefits of separating project builds from CI/CD concerns by creating a local build script which lives with your project. Not long after writing that article, I was turned on to what I’ve come to believe is one of the easiest tools I’ve encountered for managing .Net project builds thus far: npm.</p> <p>Most development platforms provide a native task-based build technology. Microsoft’s tooling for these needs is MSBuild: a command-line tool whose build files double as Visual Studio’s project and solution definition files. I used MSBuild briefly for scripting custom build concerns for a couple of years, but found it to be awkward and cumbersome. Around 2007, I abandoned use of MSBuild for creating builds and began using Rake. While it had the downside of requiring a bit of knowledge of Ruby, it was a popular choice among those willing to look outside of the Microsoft camp for tooling and had community support for working with .Net builds through the <a href="https://www.codemag.com/article/1006101/Building-.NET-Systems-with-Ruby-Rake-and-Albacore">Albacore</a> library. I’ve used a few different technologies since, but about 5 years ago I saw a demonstration of the use of npm for building .Net projects at a conference and I was immediately sold. When used well, it really is the easiest and most terse way to script a custom build for the .Net platform I’ve encountered.</p> <p>“So what’s special about npm?” you might ask. The primary appeal of using npm for building applications is that it’s easy to use.
Essentially, it’s just an orchestration of shell commands.</p> <h3 id="tasks">Tasks</h3> <p>With other build tools, you’re often required to know a specific language in addition to learning special constructs peculiar to the build tool to create build tasks. In contrast, npm’s expected package.json file simply defines an array of shell command scripts:</p> <pre><code class="language-json">{
  "name": "example",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "clean": "echo Clean the project.",
    "restore": "echo Restore dependencies.",
    "compile": "echo Compile the project.",
    "test": "echo Run the tests.",
    "dist": "echo Create a distribution."
  },
  "author": "Some author",
  "license": "ISC"
}
</code></pre> <p>As with other build tools, NPM provides the ability to define dependencies between build tasks. This is done using pre- and post- lifecycle scripts. Simply, any task issued by NPM will first execute a script by the same name with a prefix of “pre” when present and will subsequently execute a script by the same name with a prefix of “post” when present.
For example:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"clean"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Clean the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"prerestore"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run clean"</span><span class="p">,</span><span class="w"> </span><span class="nl">"restore"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Restore dependencies."</span><span class="p">,</span><span class="w"> </span><span class="nl">"precompile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run restore"</span><span class="p">,</span><span class="w"> </span><span class="nl">"compile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Compile the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"pretest"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run compile"</span><span class="p">,</span><span class="w"> </span><span class="nl">"test"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Run the tests."</span><span class="p">,</span><span class="w"> </span><span class="nl">"prebuild"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm run test"</span><span class="p">,</span><span class="w"> </span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Publish a distribution."</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Some author"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>Based on the above package.json file, issuing “npm run build” will result in running the tasks of clean, restore, compile, test, and build in that order by virtue of each declaring an appropriate dependency.</p> <p>Given you’re okay with limiting a fully-specified dependency chain where a subset of the build can be initiated at any stage (e.g. 
running “npm run test” and triggering clean, restore, and compile first) , the above orchestration can be simplified by installing the npm-run-all node dependency and defining a single pre- lifetime script for the main build target:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w"> </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"example"</span><span class="p">,</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w"> </span><span class="nl">"scripts"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"clean"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Clean the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"restore"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Restore dependencies."</span><span class="p">,</span><span class="w"> </span><span class="nl">"compile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Compile the project."</span><span class="p">,</span><span class="w"> </span><span class="nl">"test"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Run the tests."</span><span class="p">,</span><span class="w"> </span><span class="nl">"prebuild"</span><span class="p">:</span><span class="w"> </span><span class="s2">"npm-run-all clean restore compile test"</span><span class="p">,</span><span class="w"> </span><span class="nl">"build"</span><span class="p">:</span><span class="w"> </span><span class="s2">"echo Publish a distribution."</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"author"</span><span class="p">:</span><span class="w"> </span><span class="s2">"John Doe"</span><span class="p">,</span><span class="w"> </span><span class="nl">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ISC"</span><span class="p">,</span><span class="w"> </span><span class="nl">"devDependencies"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"npm-run-all"</span><span class="p">:</span><span class="w"> </span><span class="s2">"^4.1.5"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>In this example, issuing “npm run build” will result in the prebuild script executing npm-run-all with the parameters: clean, restore, compile and test which it will execute in the order listed.</p> <h3 id="variables">Variables</h3> <p>Aside from understanding how to utilize the pre- and post- lifecycle scripts to denote task dependencies, the only other thing you really need to know is how to work with variables.</p> <p>Node’s npm command facilitates the definition of variables by command-line parameters as well as declaring package variables. When npm executes, each of the properties declared within the package.json are flattened and prefixed with “npm_package_”. 
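<p>As an aside, npm-run-all can also run scripts in parallel via its --parallel flag (or the run-s and run-p shorthand commands it installs). A minimal sketch, assuming a hypothetical “lint” script that has no ordering relationship with “test” and so can run concurrently with it:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "scripts": {
    "lint": "echo Lint the sources.",
    "test": "echo Run the tests.",
    "preverify": "npm-run-all --parallel lint test",
    "verify": "echo Verification complete."
  },
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}
</code></pre></div></div>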
<h3 id="variables">Variables</h3> <p>Aside from understanding how to utilize the pre- and post- lifecycle scripts to denote task dependencies, the only other thing you really need to know is how to work with variables.</p> <p>Node’s npm command facilitates the definition of variables by command-line parameters as well as by declaring package variables. When npm executes, each of the properties declared within the package.json is flattened and prefixed with “npm_package_”. For example, the standard “version” property can be used as part of a dotnet build to denote a project version by referencing ${npm_package_version}:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "name": "example",
  "version": "1.0.0",
  "description": "",
  "configuration": "Release",
  "scripts": {
    "build": "dotnet build ./src/*.sln /p:Version=${npm_package_version}"
  },
  "author": "John Doe",
  "license": "ISC",
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}
</code></pre></div></div>
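<p>The flattening applies to nested properties as well, joining each level with an underscore. A sketch with a hypothetical custom “nuget” section (note this full flattening is npm 6 behavior, current when this was written; later npm versions trimmed the set of npm_package_* variables exposed to scripts):</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "name": "example",
  "version": "1.0.0",
  "nuget": {
    "source": "https://api.nuget.org/v3/index.json"
  },
  "scripts": {
    "restore": "dotnet restore ./src/*.sln --source ${npm_package_nuget_source}"
  }
}
</code></pre></div></div>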
<p>Command-line parameters can also be passed to npm and are similarly prefixed with “npm_config_”, with any dashes (“-”) replaced with underscores (“_”). For example, the previous version setting could be passed to dotnet.exe in the following version of package.json by issuing the below command:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>npm run build --product-version=2.0.0
</code></pre></div></div> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "name": "example",
  "version": "1.0.0",
  "description": "",
  "configuration": "Release",
  "scripts": {
    "build": "dotnet build ./src/*.sln /p:Version=${npm_config_product_version}"
  },
  "author": "John Doe",
  "license": "ISC",
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}
</code></pre></div></div> <p>(Note: the parameter --version is an npm parameter for printing the version of npm being executed and therefore can’t be used as a script parameter.)</p> <p>The only other important thing to understand about the use of variables with npm is that the method of dereferencing is dependent upon the shell used. When using npm on Windows, the default shell is cmd.exe.
If using the default shell on Windows, the version parameter would instead need to be dereferenced as %npm_config_product_version%:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "name": "example",
  "version": "1.0.0",
  "description": "",
  "configuration": "Release",
  "scripts": {
    "build": "dotnet build ./src/*.sln /p:Version=%npm_config_product_version%"
  },
  "author": "John Doe",
  "license": "ISC",
  "devDependencies": {
    "npm-run-all": "^4.1.5"
  }
}
</code></pre></div></div> <p>Until recently, I used a node package named “cross-env” which allows you to normalize how you dereference variables regardless of platform, but for several reasons (cross-env being placed in maintenance mode, the added dependency overhead, the syntax noise, and bash’s support for advanced variable expansion cases such as default values) I’d now recommend any cross-platform execution be supported by just standardizing on a single shell (e.g. bash). With the introduction of the Windows Subsystem for Linux and the virtual ubiquity of git for version control, most developer Windows systems already contain the bash shell. To configure npm to use bash at the project level, just create a file named .npmrc at the package root containing the following line:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>script-shell=bash
</code></pre></div></div>
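<p>Standardizing on bash also buys you its parameter expansion syntax. As a sketch (assuming the .npmrc setting above), a build script could fall back to the package version whenever no command-line version is supplied by using bash’s ${var:-default} form:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "scripts": {
    "build": "dotnet build ./src/*.sln /p:Version=${npm_config_product_version:-$npm_package_version}"
  }
}
</code></pre></div></div>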
<h3 id="using-node-packages">Using Node Packages</h3> <p>While not necessary, there are many CLI node packages that can easily be leveraged to aid in authoring your builds. For example, a package named “rimraf”, which functions like Linux’s “rm -rf” command, is a utility you can use to implement a clean script for recursively deleting any temporary build folders created by previous builds. In the following package.json, the build target packs a NuGet package which it outputs to a dist folder in the package root. The rimraf command is used to delete this temp folder as part of the build script’s dependencies:</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "name": "example",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "clean": "rimraf dist",
    "prebuild": "npm run clean",
    "build": "dotnet pack ./src/ExampleLibrary/ExampleLibrary.csproj -o dist /p:Version=${npm_package_version}"
  },
  "author": "John Doe",
  "license": "ISC",
  "devDependencies": {
    "npm-run-all": "^4.1.5",
    "rimraf": "^3.0.2"
  }
}
</code></pre></div></div> Building End-to-End Diagnostics: ActivitySource and OpenTelemetry 1.0 https://jimmybogard.com/building-end-to-end-diagnostics-activitysource-and-open/ Jimmy Bogard urn:uuid:d4a0dbb2-32ba-9c2c-dc77-af86e6f2e47d Tue, 08 Dec 2020 20:01:16 +0000 <p>Posts in this series:</p><ul><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-and-tracing-a-primer/">An Intro</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-and-tracing-a-primer-trace-context/">Trace Context</a></li><li><a href="https://jimmybogard.com/building-end-to-end-tracing-diagnostic-events/">Diagnostic Events</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-opentelemetry-integration/">OpenTelemetry Integration</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-activity-and-span-correlation/">Activity and Span Correlation</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-visualizations-with-exporters/">Visualization with Exporters</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-user-defined-context-with-correlation-context/">User-Defined Context with Correlation Context</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-activitysource-and-open/">ActivitySource and OpenTelemetry
1.0</a></li></ul><p>It's been a few months, and quite a lot has changed in the tracing landscape in .NET. As OpenTelemetry marches towards 1.</p> <p>Posts in this series:</p><ul><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-and-tracing-a-primer/">An Intro</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-and-tracing-a-primer-trace-context/">Trace Context</a></li><li><a href="https://jimmybogard.com/building-end-to-end-tracing-diagnostic-events/">Diagnostic Events</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-opentelemetry-integration/">OpenTelemetry Integration</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-activity-and-span-correlation/">Activity and Span Correlation</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-visualizations-with-exporters/">Visualization with Exporters</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-user-defined-context-with-correlation-context/">User-Defined Context with Correlation Context</a></li><li><a href="https://jimmybogard.com/building-end-to-end-diagnostics-activitysource-and-open/">ActivitySource and OpenTelemetry 1.0</a></li></ul><p>It's been a few months, and quite a lot has changed in the tracing landscape in .NET. As OpenTelemetry marches towards 1.0, the .NET team had to make a decision. Given there is a <em>lot</em> of code out there using the existing tracing/telemetry APIs in both the .NET codebase and various SDKs, how should .NET 5 best support the <a href="https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md">OpenTelemetry specification</a> for:</p><ul><li>Logging</li><li>Tracing</li><li>Metrics</li></ul><p>The tracing specification is solid at this point (but not yet "1.0"), and it covers not just the W3C standards on the wire, but the span APIs themselves.</p><p>Since the Activity API already exists, and it's already being used, it was <a href="https://github.com/open-telemetry/opentelemetry-dotnet/issues/684">far easier to support a "Span" concept as part of an <code>Activity</code></a> and close the gaps in functionality.</p><p>The second major addition in the .NET 5 release was the <a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.activitysource?view=net-5.0">ActivitySource</a>/<a href="https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.activitylistener?view=net-5.0">ActivityListener</a> APIs, which make it quite a bit simpler to raise and listen to events for Activity start/stop.</p><p>The overall plan was to convert from <code>DiagnosticListener</code>-based activity interaction to the new <code>ActivitySource</code> API. But before we do that, let's look at the current state of our packages.</p><h3 id="current-packages">Current Packages</h3><p>Before I made any changes, the code that started the activity was fairly light, something like:</p><pre><code class="language-csharp">_diagnosticListener.OnActivityImport(activity, context); if (_diagnosticListener.IsEnabled(StartActivityName, context)) { _diagnosticListener.StartActivity(activity, context); } else { activity.Start(); }</code></pre><p>The code starting the Activity would need to do all the work to see if the diagnostic listener was enabled, and if it were, then use the listener to start the activity. You could then pass in a <code>context</code> object that could be...anything.
The code receiving the listener event would then do whatever it wanted, and in the case of OpenTelemetry, it would enrich the event with additional tags, create a <code>Span</code>, then record it.</p><p>That left my two packages as:</p><ul><li>Xyz.Extensions.Diagnostics - raise a diagnostic listener event</li><li>Xyz.Extensions.OpenTelemetry - listen to the diagnostic listener event, add tags, and enlist with OpenTelemetry infrastructure</li></ul><p>In the "new" world, we really want to treat our generated <code>Activity</code> as our first-class "Span", and not something that some other package needs to muck around with. This means a substantial change for our packages:</p><ul><li>Xyz.Extensions.Diagnostics - create <code>ActivitySource</code>, add tags, start/stop event through <code>ActivitySource</code></li><li>Xyz.Extensions.OpenTelemetry - ??? maybe nothing?</li></ul><p>I'll come back to the second package, but the big change is moving all the tag creation code from the OpenTelemetry package over to the Diagnostics one.</p><p>This is a <em>good</em> thing, because now we don't have to rely on some external package to add interesting telemetry information to our Activity; we've already added it!</p><h3 id="converting-to-activitysource">Converting to ActivitySource</h3><p>Instead of a <code>DiagnosticListener</code>, we'll want to use an <code>ActivitySource</code>, and for this part, I'll follow the guidance on this API from <a href="https://github.com/open-telemetry/opentelemetry-dotnet/blob/master/src/OpenTelemetry.Api/README.md#instrumenting-a-libraryapplication-with-net-activity-api">OpenTelemetry</a>. We create an <code>ActivitySource</code> that we'll share for all activities in our application:</p><pre><code class="language-csharp">internal static class NServiceBusActivitySource { private static readonly AssemblyName AssemblyName = typeof(NServiceBusActivitySource).Assembly.GetName(); internal static readonly ActivitySource ActivitySource = new (AssemblyName.Name, AssemblyName.Version.ToString()); } </code></pre><p>Per conventions, the activity source name should be the name of the assembly creating the activities. That makes it much easier to "discover" activities: you don't have to expose a constant or search through source code to discern the name.</p><p>In our main methods for dealing with the <code>Activity</code>, we can take advantage of <code>Activity</code> implementing <code>IDisposable</code> to automatically stop the activity:</p><pre><code class="language-csharp">public override async Task Invoke( IIncomingPhysicalMessageContext context, Func&lt;Task&gt; next) { using (StartActivity(context)) { await next().ConfigureAwait(false); if (_diagnosticListener.IsEnabled(EventName)) { _diagnosticListener.Write(EventName, context); } } } </code></pre><p>I've kept a <code>DiagnosticListener</code> here for backward compatibility purposes.
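<p>As an aside, any consumer (not just OpenTelemetry) can now observe these activities through the new <code>ActivityListener</code> API. A minimal sketch of a hand-rolled listener for this source, not part of the packages above:</p><pre><code class="language-csharp">using System;
using System.Diagnostics;

// Subscribe to the "NServiceBus.Extensions.Diagnostics" ActivitySource by name
var listener = new ActivityListener
{
    ShouldListenTo = source =&gt; source.Name == "NServiceBus.Extensions.Diagnostics",
    // Request all data so IsAllDataRequested is true and enrichment runs
    Sample = (ref ActivityCreationOptions&lt;ActivityContext&gt; _) =&gt; ActivitySamplingResult.AllData,
    ActivityStarted = activity =&gt; Console.WriteLine($"Started {activity.DisplayName}"),
    ActivityStopped = activity =&gt; Console.WriteLine($"Stopped {activity.DisplayName} after {activity.Duration}")
};
ActivitySource.AddActivityListener(listener);
</code></pre>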
The <code>StartActivity</code> method takes a lot of the header business we did earlier, with one slight variation - using the renamed <a href="https://www.w3.org/TR/baggage/">W3C Baggage spec</a>:</p><pre><code class="language-csharp">if (context.MessageHeaders.TryGetValue(Headers.BaggageHeaderName, out var baggageValue) || context.MessageHeaders.TryGetValue(Headers.CorrelationContextHeaderName, out baggageValue)) { var baggage = baggageValue.Split(','); if (baggage.Length &gt; 0) { foreach (var item in baggage) { if (NameValueHeaderValue.TryParse(item, out var baggageItem)) { baggageItems.Add(new KeyValuePair&lt;string, string?&gt;(baggageItem.Name, HttpUtility.UrlDecode(baggageItem.Value))); } } } } </code></pre><p>Finally, starting our activity is a little different. We use the <code>ActivitySource</code> to start, passing along the <code>parentId</code> if we found it in the incoming headers:</p><pre><code class="language-csharp">var activity = parentId == null ? NServiceBusActivitySource.ActivitySource.StartActivity( ActivityNames.IncomingPhysicalMessage, ActivityKind.Consumer) : NServiceBusActivitySource.ActivitySource.StartActivity( ActivityNames.IncomingPhysicalMessage, ActivityKind.Consumer, parentId); if (activity == null) { return activity; } </code></pre><p>Now instead of checking the <code>diagnosticListener</code> to see if anyone is listening, <code>StartActivity</code> returns null when there are no listeners. We can simply return.</p><p>If there <em>is</em> someone listening, we can enrich our <code>Activity</code> with data:</p><pre><code class="language-csharp">activity.TraceStateString = traceStateString; _activityEnricher.Enrich(activity, context); foreach (var baggageItem in baggageItems) { activity.AddBaggage(baggageItem.Key, baggageItem.Value); } return activity;</code></pre><p>I've encapsulated all of the context reading of headers and setting of tags in a separate <code>ActivityEnricher</code> object:</p><pre><code class="language-csharp">public void Enrich(Activity activity, IIncomingPhysicalMessageContext context) { var destinationName = _settings.LogicalAddress(); const string operationName = "process"; activity.DisplayName = $"{destinationName} {operationName}"; activity.AddTag("messaging.message_id", context.Message.MessageId); activity.AddTag("messaging.operation", operationName); activity.AddTag("messaging.destination", destinationName); activity.AddTag("messaging.message_payload_size_bytes", context.Message.Body.Length.ToString()); </code></pre><p>I've included everything I could match on the <a href="https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/trace/semantic_conventions/messaging.md">OpenTelemetry semantic conventions</a>.</p><p>There was one extra piece, however, which was configuring "extra" information that I didn't want to turn on by default.</p><h3 id="configuring-span-attributes">Configuring Span Attributes</h3><p>In my original OpenTelemetry integration, I included a way to record the message contents as part of your trace. However, now that the <code>Activity</code> needs to include all the relevant information, I needed a way to configure the tags when it's actually started and stopped.</p><p>To do so, I integrated with NServiceBus's configuration extension point for this feature.
I first created a class to hold all of the configuration settings (just one for now):</p><pre><code class="language-csharp">public class InstrumentationOptions { public bool CaptureMessageBody { get; set; } }</code></pre><p>Next, I had my Activity enrichment class take the settings as a constructor argument:</p><pre><code class="language-csharp">internal class SettingsActivityEnricher : IActivityEnricher { private readonly ReadOnlySettings _settings; private readonly InstrumentationOptions _options; public SettingsActivityEnricher(ReadOnlySettings settings) { _settings = settings; _options = settings.Get&lt;InstrumentationOptions&gt;(); } </code></pre><p>Now when I enrich the <code>Activity</code>, I'll also look at these settings:</p><pre><code class="language-csharp">if (activity.IsAllDataRequested &amp;&amp; _options.CaptureMessageBody) { activity.AddTag("messaging.message_payload", Encoding.UTF8.GetString(context.Message.Body)); } </code></pre><p>Then when I register the NServiceBus behavior, I'll look for these settings:</p><pre><code class="language-csharp">public DiagnosticsFeature() { Defaults(settings =&gt; settings.SetDefault&lt;InstrumentationOptions&gt;(new InstrumentationOptions { CaptureMessageBody = false })); EnableByDefault(); } protected override void Setup(FeatureConfigurationContext context) { var activityEnricher = new SettingsActivityEnricher(context.Settings); context.Pipeline.Register(new IncomingPhysicalMessageDiagnostics(activityEnricher), "Parses incoming W3C trace information from incoming messages."); context.Pipeline.Register(new OutgoingPhysicalMessageDiagnostics(activityEnricher), "Appends W3C trace information to outgoing messages."); </code></pre><p>All this code is very specific to NServiceBus, so any custom diagnostics will need to plug in to whatever extensibility model exists for the middleware they're building.
For MongoDB, for example, I had a direct constructor:</p><pre><code class="language-csharp">var clientSettings = MongoClientSettings.FromUrl(mongoUrl); var options = new InstrumentationOptions { CaptureCommandText = true }; clientSettings.ClusterConfigurator = cb =&gt; cb.Subscribe(new DiagnosticsActivityEventSubscriber(options)); var mongoClient = new MongoClient(clientSettings); </code></pre><p>With all this in place, now I just needed to integrate with OpenTelemetry (though this is really optional).</p><h3 id="opentelemetry-integration">OpenTelemetry Integration</h3><p>Now that OpenTelemetry and System.Diagnostics.DiagnosticSource are more aligned, registering and listening to activities in OpenTelemetry is as simple as registering a source:</p><pre><code class="language-csharp">services.AddOpenTelemetryTracing(builder =&gt; builder .AddAspNetCoreInstrumentation() .AddSqlClientInstrumentation(opt =&gt; opt.SetTextCommandContent = true) .AddSource("NServiceBus.Extensions.Diagnostics") .AddZipkinExporter(o =&gt; { o.Endpoint = new Uri("http://localhost:9411/api/v2/spans"); o.ServiceName = Program.EndpointName; }) </code></pre><p>Or, to make things a little easier, the old <code><a href="https://www.nuget.org/packages/NServiceBus.Extensions.Diagnostics.OpenTelemetry/">NServiceBus.Extensions.Diagnostics.OpenTelemetry</a></code> package now simply wraps that one line:</p><pre><code class="language-csharp">public static class TracerProviderBuilderExtensions { public static TracerProviderBuilder AddNServiceBusInstrumentation(this TracerProviderBuilder builder) =&gt; builder.AddSource("NServiceBus.Extensions.Diagnostics"); } </code></pre><p>It's not required, but at this point, this is <em>all</em> we need to bridge <code>ActivitySource</code>-enabled telemetry with OpenTelemetry.</p><p>So that's all for now - OpenTelemetry marches to 1.0, and once it does, I'll update my <a href="https://github.com/jbogard/nsb-diagnostics-poc">end-to-end example</a> (it's on the RC now).</p> Conventional Options https://lostechies.com/derekgreer/2020/11/20/conventional-options/ Los Techies urn:uuid:bddf9f6a-a398-da37-4b68-84d04a57d922 Fri, 20 Nov 2020 07:00:00 +0000 I’ve really enjoyed working with the Microsoft Configuration libraries introduced with .Net Core approximately 5 years ago. The older XML-based API was quite a pain to work with, so the ConfigurationBuilder and associated types filled a long overdue need for the platform. <p>I’ve really enjoyed working with the Microsoft Configuration libraries introduced with .Net Core approximately 5 years ago.
The older XML-based API was quite a pain to work with, so the ConfigurationBuilder and associated types filled a long overdue need for the platform.</p> <p>I had long since adopted a practice of creating discrete configuration classes, populated and registered with a DI container, over direct use of the ConfigurationManager class within components, so I was pleased to see the platform nudge developers in this direction through the introduction of the IOptions&lt;T&gt; type.</p> <p>A few aspects of the prescribed use of the IOptions&lt;T&gt; type I wasn’t particularly fond of were needing to inject IOptions&lt;T&gt; rather than the actual options type, taking a dependency upon the Microsoft.Extensions.Options package from my library packages, and the ceremony of binding the options to the IConfiguration instance. To address these concerns, I wrote some extension methods which took care of binding the type to my configuration by convention (i.e. binding a type with a suffix of Options to a section corresponding to the option type’s prefix) and registering it with the container.</p> <p>I’ve recently released a new version of these extensions supporting several of the most popular containers as an open source library. You can find the project <a href="http://github.com/derekgreer/conventional-options">here</a>.</p> <p>The following are the steps for using these extensions:</p> <h3 id="step-1">Step 1</h3> <p>Install ConventionalOptions for the target DI container:</p> <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$&gt; nuget install ConventionalOptions.DependencyInjection
</code></pre></div></div> <h3 id="step-2">Step 2</h3> <p>Add Microsoft’s Options feature and register option types:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code>services.AddOptions();
services.RegisterOptionsFromAssemblies(Configuration, Assembly.GetExecutingAssembly());
</code></pre></div></div> <h3 id="step-3">Step 3</h3> <p>Create an Options class with the desired properties:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class OrderServiceOptions
{
  public string StringProperty { get; set; }
  public int IntProperty { get; set; }
}
</code></pre></div></div>
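<p>For a sense of what the convention amounts to (a minimal sketch of the idea, not the library’s actual implementation), registration conceptually scans an assembly for types whose names end in “Options”, binds each to the configuration section named by its prefix, and registers the bound instance so the options type itself can be injected:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code>using System;
using System.Linq;
using System.Reflection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public static class ConventionSketch
{
  public static void RegisterByConvention(
    IServiceCollection services, IConfiguration configuration, Assembly assembly)
  {
    var optionTypes = assembly.GetTypes()
      .Where(t =&gt; t.IsClass &amp;&amp; !t.IsAbstract &amp;&amp; t.Name.EndsWith("Options"));

    foreach (var type in optionTypes)
    {
      // e.g. OrderServiceOptions binds to the "OrderService" section
      var sectionName = type.Name.Substring(0, type.Name.Length - "Options".Length);
      var instance = Activator.CreateInstance(type);
      configuration.GetSection(sectionName).Bind(instance);
      services.AddSingleton(type, instance);
    }
  }
}
</code></pre></div></div>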
<h3 id="step-4">Step 4</h3> <p>Provide a corresponding configuration section matching the prefix of the Options class (e.g. in appsettings.json):</p> <div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "OrderService": {
    "StringProperty": "Some value",
    "IntProperty": 42
  }
}
</code></pre></div></div> <h3 id="step-5">Step 5</h3> <p>Inject the options into types resolved from the container:</p> <div class="language-csharp highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class OrderService
{
  public OrderService(OrderServiceOptions options)
  {
    // ... use options
  }
}
</code></pre></div></div> <p>Currently ConventionalOptions works with Microsoft’s DI Container, Autofac, Lamar, Ninject, and StructureMap.</p> <p>Enjoy!</p> Vertical Slice Example Updated to .NET 5 https://jimmybogard.com/vertical-slice-example-updated-to-net-5/ Jimmy Bogard urn:uuid:cac73d2f-ce70-1e57-8eff-5fd747af25f6 Thu, 19 Nov 2020 20:22:38 +0000 <p>With the <a href="https://devblogs.microsoft.com/dotnet/announcing-net-5-0/">release of .NET 5</a>, I wanted to update my <a href="https://github.com/jbogard/contosoUniversityDotNetCore-Pages">Vertical Slice code example</a> (Contoso University) to .NET 5, as well as leverage all the C# 9 goodness. The migration itself was <em>very</em> simple, but updating the dependencies along the way proved to be a bit more of a</p> <p>With the <a href="https://devblogs.microsoft.com/dotnet/announcing-net-5-0/">release of .NET 5</a>, I wanted to update my <a href="https://github.com/jbogard/contosoUniversityDotNetCore-Pages">Vertical Slice code example</a> (Contoso University) to .NET 5, as well as leverage all the C# 9 goodness. The migration itself was <em>very</em> simple, but updating the dependencies along the way proved to be a bit more of a challenge.</p><h3 id="updating-the-runtime-and-dependencies">Updating the runtime and dependencies</h3><p>The first step was migrating the target framework to .NET 5. This is about as simple as it gets:</p><pre><code class="language-diff">&lt;Project Sdk="Microsoft.NET.Sdk.Web"&gt; &lt;PropertyGroup&gt; - &lt;TargetFramework&gt;netcoreapp3.1&lt;/TargetFramework&gt; + &lt;TargetFramework&gt;net5.0&lt;/TargetFramework&gt; &lt;/PropertyGroup&gt; &lt;/Project&gt;</code></pre><p>Next, I needed to update the dependencies. This application hadn't had its dependencies updated in a few months, so most of the changes were around that.
Otherwise, any Microsoft.* dependency was updated to a 5.0 version:</p><pre><code class="language-diff"> &lt;ItemGroup&gt; &lt;PackageReference Include="AutoMapper" Version="10.1.1" /&gt; &lt;PackageReference Include="AutoMapper.Extensions.Microsoft.DependencyInjection" Version="8.1.0" /&gt; - &lt;PackageReference Include="DelegateDecompiler.EntityFrameworkCore" Version="0.28.0" /&gt; - &lt;PackageReference Include="FluentValidation.AspNetCore" Version="9.2.0" /&gt; - &lt;PackageReference Include="HtmlTags" Version="8.0.0" /&gt; + &lt;PackageReference Include="DelegateDecompiler.EntityFrameworkCore5" Version="0.28.2" /&gt; + &lt;PackageReference Include="FluentValidation.AspNetCore" Version="9.3.0" /&gt; + &lt;PackageReference Include="HtmlTags" Version="8.1.1" /&gt; &lt;PackageReference Include="MediatR" Version="9.0.0" /&gt; &lt;PackageReference Include="MediatR.Extensions.Microsoft.DependencyInjection" Version="9.0.0" /&gt; &lt;PackageReference Include="MiniProfiler.AspNetCore.Mvc" Version="4.2.1" /&gt; &lt;PackageReference Include="MiniProfiler.EntityFrameworkCore" Version="4.2.1" /&gt; - &lt;PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="3.1.9" /&gt; - &lt;PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="3.1.9"&gt; + &lt;PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="5.0.0" /&gt; + &lt;PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="5.0.0"&gt; &lt;PrivateAssets&gt;all&lt;/PrivateAssets&gt; &lt;IncludeAssets&gt;runtime; build; native; contentfiles; analyzers; buildtransitive&lt;/IncludeAssets&gt; &lt;/PackageReference&gt; - &lt;PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="3.1.9" /&gt; - &lt;PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="3.1.4" /&gt; + &lt;PackageReference Include="Microsoft.Extensions.Logging.Debug" Version="5.0.0" /&gt; + &lt;PackageReference Include="Microsoft.VisualStudio.Web.CodeGeneration.Design" Version="5.0.0" /&gt; &lt;/ItemGroup&gt;</code></pre><p>This was as simple as updating my packages to the latest version. From here, I compiled, ran the build, and ran the app. Everything worked as expected.</p><p>Some folks are put off by the fact that 5.0 is not "LTS" but that really shouldn't stop you from using it - you can always upgrade to the next LTS when it comes out.</p><p>One package I did need to change was the <code>DelegateDecompiler.EntityFrameworkCore</code> one, and this led to <a href="https://jimmybogard.com/mind-your-strings-with-net-5-0/">my last post</a> and finally a new package that explicitly targets EF Core 5. This is because EF Core changed an API that broke the existing package, so it was easier to create a new package than make the multi-targeting more complicated. With all the packages and runtime updated, I next wanted to see how C# 9 might improve things.</p><h3 id="updating-to-c-9">Updating to C# 9</h3><p>The runtime upgrade on a vanilla ASP.NET Core 3.1 application brings with it lots of performance improvements, but I was <em>really</em> looking forward to the C# 9 improvements (many of which technically work on lower versions, but that's a different story).</p><p><a href="https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-9">C# 9 has a lot of major and minor improvements</a>, but a few I was really interested in:</p><ul><li>Records</li><li>Init-only setters</li><li>Pattern-matching</li><li>Static lambda functions</li></ul><p>First up are records.
I wasn't quite sure if records were supported everywhere in my application, but I wanted to look at them for DTOs, which were all the request/response objects used with MediatR:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2020/11/191656_image.png" class="kg-image"><figcaption>Request and responses with handlers</figcaption></figure><p>Incoming request objects use ASP.NET Core data binding, <a href="https://docs.microsoft.com/en-us/aspnet/core/migration/31-to-50?view=aspnetcore-5.0&amp;tabs=visual-studio#complexobjectmodelbinderprovider--complexobjectmodelbinder-replace-complextypemodelbinderprovider--complextypemodelbinder">which <em>does</em> support records</a>, and outgoing objects use AutoMapper with <code>ProjectTo</code>, so I needed to understand if EF Core 5 supported record types for LINQ projections.</p><p>It does, however: I've tested LINQ expressions, compilation, and reflection, and they all still work with record types. Converting my DTOs to record types was simple, but I did need to decide between positional vs. nominal record types. I decided to go with nominal as I found it easier to read:</p><pre><code class="language-csharp">public record Query : IRequest&lt;Command&gt; { public int? Id { get; init; } } public record Command : IRequest { public int Id { get; init; } public string LastName { get; init; } [Display(Name = "First Name")] public string FirstMidName { get; init; } public DateTime? EnrollmentDate { get; init; } } </code></pre><p>I changed <code>class</code> to <code>record</code> and <code>set</code> to <code>init</code>, and now I've got an "immutable" record type.</p>
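<p>For comparison, the positional form of the same command (the one passed on here) compresses this into a single declaration; note that attributes like [Display] would need the <code>property:</code> target to land on the generated property:</p><pre><code class="language-csharp">public record Command(int Id, string LastName, [property: Display(Name = "First Name")] string FirstMidName, DateTime? EnrollmentDate) : IRequest;</code></pre>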
<p>From there, I needed to fix some compile errors which were mainly around mutating DTOs. That wound up making the code easier to understand/follow, because instead of creating the DTO and mutating it several times, I could either gather all the data I needed up front, or build it as part of the initialization expression:</p><pre><code class="language-csharp">var results = await students .ProjectTo&lt;Model&gt;(_configuration) .PaginatedListAsync(pageNumber, pageSize); var model = new Result { CurrentSort = message.SortOrder, NameSortParm = string.IsNullOrEmpty(message.SortOrder) ? "name_desc" : "", DateSortParm = message.SortOrder == "Date" ? "date_desc" : "Date", CurrentFilter = searchString, SearchString = searchString, Results = results }; </code></pre><p>Initially this code would instantiate the <code>Result</code> object, mutate it in a few places, then return it. I found the places that mutated the DTOs to be much more confusing, which I'm sure the functional folks are scoffing at.</p><p>Less impactful was pattern matching with switch expressions, but it did provide some improvement:</p><pre><code class="language-csharp">//before switch (message.SortOrder) { case "name_desc": students = students.OrderByDescending(s =&gt; s.LastName); break; case "Date": students = students.OrderBy(s =&gt; s.EnrollmentDate); break; case "date_desc": students = students.OrderByDescending(s =&gt; s.EnrollmentDate); break; default: students = students.OrderBy(s =&gt; s.LastName); break; } //after students = message.SortOrder switch { "name_desc" =&gt; students.OrderByDescending(s =&gt; s.LastName), "Date" =&gt; students.OrderBy(s =&gt; s.EnrollmentDate), "date_desc" =&gt; students.OrderByDescending(s =&gt; s.EnrollmentDate), _ =&gt; students.OrderBy(s =&gt; s.LastName) }; </code></pre><p>It's small, but removes a lot of the noise of the code. Finally, I tried to use static lambdas as much as I could when the lambda didn't capture any closure variables (or shouldn't).</p><p>All in all, a very easy migration. Next, I'll be updating my OSS to C# 9 where I can (without using breaking features).</p> Mind Your Strings with .NET 5.0 https://jimmybogard.com/mind-your-strings-with-net-5-0/ Jimmy Bogard urn:uuid:82e5f8b9-48b0-ce1f-e850-77b34e6016c2 Tue, 27 Oct 2020 14:05:20 +0000 <p>In the process of <a href="https://github.com/hazzik/DelegateDecompiler/pull/164">upgrading a library to support .NET 5.0</a>, I ran into a rather bizarre test failure. Simply by adding a new target of <code>net5.0</code> in the library and unit tests, a single test started to fail. It was absolutely baffling to me, leading me to think</p> <p>In the process of <a href="https://github.com/hazzik/DelegateDecompiler/pull/164">upgrading a library to support .NET 5.0</a>, I ran into a rather bizarre test failure. Simply by adding a new target of <code>net5.0</code> in the library and unit tests, a single test started to fail. It was absolutely baffling to me, leading me to think that I may have somehow broken strings.
Instead, I got to learn waaaaaay more about strings, Unicode, and comparison strategies than I ever really cared to.</p><p>But for those migrating to .NET 5.0, you'll want to mind your string methods quite closely.</p><h3 id="the-faulty-assertion">The Faulty Assertion</h3><p>It started off with a test failure:</p><pre><code class="language-csharp">[Test] public void TestDetailOneGroupTwoClassesSupported() { //SETUP MasterEnvironment.ResetLogging(); var classLog1 = new ClassLog(@"TestGroup01UnitTestGroup\Test01MyUnitTest1"); MasterEnvironment.AddClassLog(classLog1); example2Supported.ForEach(classLog1.MethodLogs.Add); var classLog2 = new ClassLog(@"TestGroup01UnitTestGroup\Test01MyUnitTest2"); MasterEnvironment.AddClassLog(classLog2); example2Supported.ForEach(classLog2.MethodLogs.Add); //ATTEMPT var markup = MasterEnvironment.ResultsAsMarkup(OutputVersions.Detail); //VERIFY markup.ShouldStartWith("Detail"); markup.ShouldContain("Group: Unit Test Group"); markup.ShouldContain("\n#### [My Unit Test1]("); markup.ShouldContain("\n#### [My Unit Test2]("); // &lt;- this blew up markup.ShouldContain("\n- Supported\n * Good1 (line 1)\n * Good2 (line 2)"); } </code></pre><p>The next to last assertion blew up on .NET 5.0 but worked on all other platforms tested (.NET Core 2.0, 3.0, 3.1, .NET 4.5). I first verified that the value to test was exactly the same (it was), then looked at the assertion. This library uses NUnit, and eventually makes a comparison using <code>string.IndexOf(value, StringComparison.CurrentCulture)</code>. As I was debugging, I put the values into the watch window and got some rather strange results:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/images/2020/10/271333_image.png" class="kg-image"><figcaption>IndexOf not matching Contains</figcaption></figure><p>Well that's a bit disconcerting - I ask "do you contain this value". "Yes". "Where". "I dunno". How can this be? I went back to .NET Core 3.1, and saw that the two methods were consistent: it said the value was contained AND it found the right place.</p><p>So I didn't break strings, but SOMETHING was different here. I <a href="https://github.com/dotnet/runtime/issues/43736">opened a GitHub issue</a> to understand why <a href="https://twitter.com/jbogard/status/1319374415973535746?s=20">this behavior seemingly broke</a>.</p><h3 id="unicode-strikes-again">Unicode Strikes Again</h3><p>Underneath the covers, <code>IndexOf</code> and <code>Contains</code> do subtly different things, it turns out. While <code>Contains</code> is an ordinal search, <code>IndexOf</code> is a <em>linguistic</em> search, which depends on what kind of linguistic searching strategy is used. NUnit using <code>CurrentCulture</code> means it's searching linguistically (the default). But why would the result change based on just upgrading the target platform?
Why do I see such different behavior of strings based on the platform?</p><!--kg-card-begin: markdown--><table> <thead> <tr> <th>Method</th> <th><code>netcoreapp3.1</code></th> <th><code>net5.0</code></th> </tr> </thead> <tbody> <tr> <td><code>actual.Contains(expected)</code></td> <td>True</td> <td>True</td> </tr> <tr> <td><code>actual.IndexOf(expected)</code></td> <td>1475</td> <td>-1</td> </tr> <tr> <td><code>actual.Contains(expected, StringComparison.CurrentCulture)</code></td> <td>True</td> <td>False</td> </tr> <tr> <td><code>actual.IndexOf(expected, StringComparison.CurrentCulture)</code></td> <td>1475</td> <td>-1</td> </tr> <tr> <td><code>actual.Contains(expected, StringComparison.Ordinal)</code></td> <td>True</td> <td>True</td> </tr> <tr> <td><code>actual.IndexOf(expected, StringComparison.Ordinal)</code></td> <td>1475</td> <td>1475</td> </tr> <tr> <td><code>actual.Contains(expected, StringComparison.InvariantCulture)</code></td> <td>True</td> <td>False</td> </tr> <tr> <td><code>actual.IndexOf(expected, StringComparison.InvariantCulture)</code></td> <td>1475</td> <td>-1</td> </tr> </tbody> </table> <!--kg-card-end: markdown--><p>It turns out the string I was searching was... weird. It contained quite a few instances of <code>Lorem Ipsum\n\r\ndolor sit amet</code>, where it mixes different line-ending styles, and I was searching for <code>\ndolor</code>. That search would split the <code>\r\n</code> value in two.</p><p>So what changed? It turns out that for globalization services, the Windows default is <a href="https://docs.microsoft.com/en-us/windows/win32/intl/national-language-support">National Language Support (NLS)</a>. But for Unix-based platforms, that standard is <a href="http://site.icu-project.org/home">International Components for Unicode (ICU)</a>. The switch in .NET 5.0 is to move towards the more widely-accepted standard, ICU. <a href="https://docs.microsoft.com/en-us/dotnet/standard/globalization-localization/globalization-icu">ICU will be part of Windows</a> and can be consumed using a NuGet package, but in my code, the behavior was <em>always</em> different between Unix and Windows. If I ran .NET Core 3.1 on Unix, I'd get the same result as .NET 5.0.</p><p>So how to fix this?</p><p>Back in the table above, we can see that an <code>Ordinal</code> comparison results in consistent behavior, and the searching behavior I actually want. I changed my test to use <code>Ordinal</code> and everything passed.</p><p>But what should <em>you</em> do? For a start, do NOT rely on any string methods that do not explicitly pass in a <code>StringComparison</code> value. Unless you can readily pull up the underlying source code, it's not obvious what the default string comparison value is, and this default is not consistent across <code>string</code> methods.</p><p>From there, I recommend <a href="https://docs.microsoft.com/en-us/dotnet/standard/base-types/best-practices-strings#recommendations-for-string-usage">following the Microsoft guidelines for string usage</a>.
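<p>To make the table concrete, here is a minimal repro sketch (the results in the comments assume a .NET 5.0 runtime using ICU; exact behavior depends on the platform's globalization stack):</p><pre><code class="language-csharp">using System;

var actual = "Lorem Ipsum\r\ndolor sit amet";
var expected = "\ndolor";

// Ordinal comparisons see raw UTF-16 code units, so "\n" matches inside "\r\n"
Console.WriteLine(actual.Contains(expected, StringComparison.Ordinal));       // True
Console.WriteLine(actual.IndexOf(expected, StringComparison.Ordinal));        // 12

// Linguistic comparisons under ICU treat "\r\n" as a single text element
Console.WriteLine(actual.IndexOf(expected, StringComparison.CurrentCulture)); // -1 on .NET 5.0
</code></pre>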
If you're upgrading to .NET 5.0, be aware of ICU and <a href="https://docs.microsoft.com/en-us/dotnet/standard/globalization-localization/globalization-icu">how to turn it off if it's a problem</a>.</p><p>For more reading, check out the <a href="https://github.com/dotnet/runtime/issues/43736">original GitHub issue</a>, and you'll learn all about exciting words such as "grapheme".</p> MediatR 9.0 Released https://jimmybogard.com/mediatr-9-0-released/ Jimmy Bogard urn:uuid:f2fd4999-002b-abf8-a904-6c6ac5629dae Fri, 09 Oct 2020 13:05:01 +0000 <p>A small release, from the <a href="https://github.com/jbogard/MediatR/releases/tag/v9.0.0">release notes</a>:</p><p>This release contains a small, but breaking change. In order to provide a simpler interface, the <code>IMediator</code> interface is now split into two interfaces, a sender and publisher:</p><pre><code class="language-csharp">public interface ISender { Task&lt;TResponse&gt; Send&lt;TResponse&gt;(IRequest&lt;TResponse&gt;</code></pre> <p>A small release, from the <a href="https://github.com/jbogard/MediatR/releases/tag/v9.0.0">release notes</a>:</p><p>This release contains a small, but breaking change. In order to provide a simpler interface, the <code>IMediator</code> interface is now split into two interfaces, a sender and a publisher:</p><pre><code class="language-csharp">public interface ISender { Task&lt;TResponse&gt; Send&lt;TResponse&gt;(IRequest&lt;TResponse&gt; request, CancellationToken cancellationToken = default); Task&lt;object?&gt; Send(object request, CancellationToken cancellationToken = default); } public interface IPublisher { Task Publish(object notification, CancellationToken cancellationToken = default); Task Publish&lt;TNotification&gt;(TNotification notification, CancellationToken cancellationToken = default) where TNotification : INotification; } public interface IMediator : ISender, IPublisher { } </code></pre><p>The main motivation here is that sending should be a top-level concern of an application, but publishing can happen anywhere. This interface segregation should help catch design errors, where you should never send requests from inside a request handler.</p><p>I've also updated the <a href="https://www.nuget.org/packages/MediatR.Extensions.Microsoft.DependencyInjection/">MediatR.Extensions.Microsoft.DependencyInjection</a> package to 9.0 as well.
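<p>To illustrate the split (a hypothetical controller and request types, not from the release notes): request dispatching at the top level can now depend on <code>ISender</code> alone, while code that only raises notifications can take just <code>IPublisher</code>:</p><pre><code class="language-csharp">public class OrdersController : ControllerBase
{
    private readonly ISender _sender;

    public OrdersController(ISender sender) =&gt; _sender = sender;

    // Sending stays a top-level concern of the application
    [HttpPost]
    public Task&lt;OrderResult&gt; Create(CreateOrder command, CancellationToken token) =&gt; _sender.Send(command, token);
}
</code></pre>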
Enjoy!</p> Picking a Web Microframework https://lostechies.com/ryansvihla/2020/05/27/picking-a-microframework/ Los Techies urn:uuid:b75786e5-449a-a555-4277-5555fd14eb08 Wed, 27 May 2020 00:23:00 +0000 I’ve had to do this at work the last couple of weeks. We had a “home grown” framework for a new application we’re working on, and the first thing I did was try to rip that out (it was a new project, so there wasn’t yet any URL and parameter sanitization, routing, etc. to preserve). <p>I’ve had to do this at work the last couple of weeks. We had a “home grown” framework for a new application we’re working on, and the first thing I did was try to rip that out (it was a new project, so there wasn’t yet any URL and parameter sanitization, routing, etc. to preserve).</p> <p>However, being that the group I was working with is pretty “anti framework”, I had to settle on something that was lightweight, integrated with Jetty, and allowed us to work the way that was comfortable for us as a team (also, it had to work with Scala).</p> <h2 id="microframeworks">Microframeworks</h2> <p>The team had shown a lot of disdain for Play (which I had actually used quite a lot when I last led a JVM-based tech stack) and Spring Boot as being too heavyweight, so these were definitely out.</p> <p>Fortunately, in the JVM world there is a big push back now against heavy web frameworks, which meant I had lots of choices for “non frameworks” that could still do some basic security, routing, and authentication without hurting the existing team’s productivity.</p> <p>There are probably 3 dozen microframeworks to choose from with varying degrees of value, but the ones that seemed easiest to start with today were:</p> <ul> <li><a href="https://scalatra.org">Scalatra</a></li> <li><a href="https://javalin.io">Javalin</a></li> <li><a href="https://quarkus.io">Quarkus</a></li> </ul> <h3 id="my-attempt-with-quarkus">My Attempt with Quarkus</h3> <p><a href="https://quarkus.io/">Quarkus</a> has a really great getting-started story, but it’s harder to get started with on an existing project: while it was super trivial to add, after a couple of days of figuring out the magic incantation I just decided to punt on it. I think because of its popularity in the Cloud Native space (which we’re trying to target), the backing of <a href="https://developers.redhat.com/blog/2019/03/07/quarkus-next-generation-kubernetes-native-java-framework/">Red Hat</a>, and the pluggable nature of the stack, there are a lot of reasons to want this to work. In the end, because of the timeline, it didn’t make the cut. But it may come back.</p> <h3 id="my-attempt-with-javalin">My Attempt with Javalin</h3> <p>Javalin, despite being a less popular project than Quarkus, is getting some buzz. It also looks like it just slides into the team’s existing Servlet code base.
I wanted this to work very badly, but I stopped before I even started because of <a href="https://github.com/tipsy/javalin/issues/931">this issue</a>, so this was out despite being, on paper, a really excellent framework.</p> <h3 id="my-attempt-with-scalatra">My Attempt with Scalatra</h3> <p><a href="https://scalatra.org/">Scalatra</a> has been around for a number of years and is inspired by <a href="http://sinatrarb.com/">Sinatra</a>, which I used quite a bit in my Ruby years. This took a few minutes to get going just following their <a href="https://scalatra.org/guides/2.7/deployment/standalone.html">standalone directions</a>, and then some more to successfully convert the routes and work through the learning curve.</p> <p>Some notes:</p> <ul> <li>The routing API and parameters etc. are very nice to work with IMO.</li> <li>It was <a href="https://scalatra.org/guides/2.7/formats/json.html">very easy</a> to get json-by-default support set up.</li> <li>Metrics were <a href="https://scalatra.org/guides/2.7/monitoring/metrics.html">very easy</a> to wire up.</li> <li>Swagger integration was pretty rough; while it looks good on paper, I could not get an example to show up, and it is unable to <a href="https://github.com/scalatra/scalatra/issues/343">handle case classes or enums</a>, which we use.</li> <li>Benchmark performance, when I've <a href="https://johnykov.github.io/bootzooka-akka-http-vs-scalatra.html">looked</a> around the web, was pretty bad; I've not done enough to figure out if this is real or not. I've seen firsthand that a lot of benchmarks are just wrong.</li> <li>Integration with JUnit has been rough, and I cannot seem to get the correct port to fire; I suspect I just have to stop using the @Test annotation (which I'm not enjoying).</li> <li>Http/2 support is still lacking despite being available in the version of Jetty they're on. I've read in a few places that an issue with keeping <a href="https://github.com/eclipse/jetty.project/issues/1364">web sockets working</a> is the holdup, but either way there is <a href="https://github.com/scalatra/scalatra/issues/757">no official support in the project yet</a>.</li> </ul> <h2 id="conclusion">Conclusion</h2> <p>I think we're going to stick with Scalatra for the time being, as it is a mature framework that works well for our current goals. However, the lack of http/2 support may be a deal breaker in the medium term.</p> Getting started with Cassandra: Data modeling in the brief https://lostechies.com/ryansvihla/2020/02/05/getting-started-cassandra-part-3/ Los Techies urn:uuid:faeba5a6-db95-bc14-4f6f-333e146885f1 Wed, 05 Feb 2020 20:23:00 +0000
<p>Cassandra data modeling isn't really something you can do "in the brief", and is itself a subject that can take years to fully grasp, but this should be a good starting point.</p> <h2 id="introduction">Introduction</h2> <p>Cassandra distributes data around the cluster via the <em>partition key</em>.</p> <pre><code class="language-sql">CREATE TABLE my_key.my_table_by_postal_code (
  postal_code text,
  id uuid,
  balance float,
  PRIMARY KEY(postal_code, id));
</code></pre> <p>In the above table the <em>partition key</em> is <code>postal_code</code> and the <em>clustering column</em> is <code>id</code>. The <em>partition key</em> locates the data on the cluster for us. The clustering column allows us multiple rows per <em>partition key</em>, so that we can filter how much data we read per partition. The 'optimal' query is one that retrieves data from only one node, and not so much data that GC pressure or latency issues result. The following query breaks that rule and retrieves 2 partitions at once via the IN parameter.</p> <pre><code class="language-sql">SELECT * FROM my_key.my_table_by_postal_code WHERE postal_code IN ('77002', '77043');
</code></pre> <p>This <em>can be</em> slower than doing two separate queries asynchronously, especially if those partitions are on two different nodes (imagine if there are 1000+ partitions in the IN statement). In summary, the simple rule to stick to is "1 partition per query".</p> <h3 id="partition-sizes">Partition sizes</h3> <p>A common mistake when data modeling is to jam as much data as possible into a single partition (a time-bucketed partition key, as sketched below, is one way to avoid this).</p> <ul> <li>This doesn't distribute the data well and therefore misses the point of a distributed database.</li> <li>There are practical limits on the <a href="https://issues.apache.org/jira/browse/CASSANDRA-9754">performance of partition sizes</a>.</li> </ul>
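<p>As a hedged illustration of keeping partitions bounded (the table and column names here are my own invention, not from the original post), a time bucket can be added to the partition key so that no single partition grows without limit:</p> <pre><code class="language-sql">-- bucketing by month bounds partition growth: each (device_id, month)
-- pair is its own partition instead of one ever-growing partition per device
CREATE TABLE my_key.events_by_device (
  device_id uuid,
  month text,           -- e.g. '2020-02'
  event_time timestamp,
  payload text,
  PRIMARY KEY((device_id, month), event_time));
</code></pre>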
<h3 id="table-per-query-pattern">Table per query pattern</h3> <p>A common approach to optimize around partition lookup is to create a table per query, and write to all of them on update. The following example has two related tables, each solving a different query:</p> <pre><code class="language-sql">--query by postal_code
CREATE TABLE my_key.my_table_by_postal_code (
  postal_code text,
  id uuid,
  balance float,
  PRIMARY KEY(postal_code, id));

SELECT * FROM my_key.my_table_by_postal_code WHERE postal_code = '77002';

--query by id
CREATE TABLE my_key.my_table (
  id uuid,
  name text,
  address text,
  city text,
  state text,
  postal_code text,
  country text,
  balance float,
  PRIMARY KEY(id));

SELECT * FROM my_key.my_table WHERE id = 7895c6ff-008b-4e4c-b0ff-ba4e4e099326;
</code></pre> <p>You can update both tables at once with a logged batch:</p> <pre><code class="language-sql">BEGIN BATCH
  INSERT INTO my_key.my_table (id, name, address, city, state, postal_code, country, balance)
  VALUES (7895c6ff-008b-4e4c-b0ff-ba4e4e099326, 'Jean Dupont', '12 Rue Sainte-Catherine', 'Bordeaux', 'Gironde', '33000', 'France', 56.20);
  INSERT INTO my_key.my_table_by_postal_code (postal_code, id, balance)
  VALUES ('33000', 7895c6ff-008b-4e4c-b0ff-ba4e4e099326, 56.20);
APPLY BATCH;
</code></pre> <h3 id="source-of-truth">Source of truth</h3> <p>A common design pattern is to have one table act as the authoritative one over the data; if for some reason there is a mismatch or conflict in other tables, as long as there is one table considered "the source of truth" it is easy to fix any conflicts later. This is typically the table that would match what we see in a typical relational database, and it has all the data needed to generate all related views or indexes for different query methods.
Taking the prior example, <code>my_table</code> is the source of truth:</p> <pre><code class="language-sql">--source of truth table
CREATE TABLE my_key.my_table (
  id uuid,
  name text,
  address text,
  city text,
  state text,
  postal_code text,
  country text,
  balance float,
  PRIMARY KEY(id));

SELECT * FROM my_key.my_table WHERE id = 7895c6ff-008b-4e4c-b0ff-ba4e4e099326;

--based on my_key.my_table, so we can query by postal_code
CREATE TABLE my_key.my_table_by_postal_code (
  postal_code text,
  id uuid,
  balance float,
  PRIMARY KEY(postal_code, id));

SELECT * FROM my_key.my_table_by_postal_code WHERE postal_code = '77002';
</code></pre> <p>Next we discuss strategies for keeping related tables in sync.</p> <h3 id="materialized-views">Materialized views</h3> <p>Materialized views are a feature that ships with Cassandra but is currently considered rather experimental.
If you want to use them anyway:</p> <pre><code class="language-sql">CREATE MATERIALIZED VIEW my_key.my_table_by_postal_code AS
  SELECT postal_code, id, balance
  FROM my_key.my_table
  WHERE postal_code IS NOT NULL AND id IS NOT NULL
  PRIMARY KEY(postal_code, id);
</code></pre> <p>Materialized views at least run faster than the comparable BATCH insert pattern, but they have a number of bugs and known issues that are still pending fixes.</p> <h3 id="secondary-indexes">Secondary indexes</h3> <p>Secondary indexes are the original server-side approach to handling different query patterns, but they have a large number of downsides:</p> <ul> <li>rows are read serially, one node at a time, until the limit is reached.</li> <li>a suboptimal storage layout leads to very large partitions if the data distribution of the secondary index is not ideal.</li> </ul> <p>For just those two reasons, I think it's rare that one can use secondary indexes and expect reasonable performance.
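<p>For reference, a native secondary index is declared with <code>CREATE INDEX</code>; this is a minimal sketch against the <code>my_table</code> schema from above (the index name is my own placeholder):</p> <pre><code class="language-sql">-- declares a server-side secondary index on a non-key column
CREATE INDEX my_table_postal_code_idx ON my_key.my_table (postal_code);

-- this query now works, but fans out across the cluster as described above
SELECT * FROM my_key.my_table WHERE postal_code = '77002';
</code></pre>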
However, you can make one by hand and just query that data asynchronously to avoid some of the downsides:</p> <pre><code class="language-sql">CREATE TABLE my_key.my_table_by_postal_code_2i (
  postal_code text,
  id uuid,
  PRIMARY KEY(postal_code, id));

SELECT * FROM my_key.my_table_by_postal_code_2i WHERE postal_code = '77002';

--retrieve all rows, then asynchronously query the resulting ids
SELECT * FROM my_key.my_table WHERE id = ad004ff2-e5cb-4245-94b8-d6acbc22920a;
SELECT * FROM my_key.my_table WHERE id = d30e9c65-17a1-44da-bae0-b7bb742eefd6;
SELECT * FROM my_key.my_table WHERE id = e016ae43-3d4e-4093-b745-8583627eb1fe;
</code></pre> <h2 id="exercises">Exercises</h2> <h3 id="contact-list">Contact List</h3> <p>This is a good basic first use case, as one needs to use multiple tables for the same data, but there should not be too many (a starter sketch follows the requirements list).</p> <h4 id="requirements">requirements</h4> <ul> <li>contacts should have first name, last name, address, state/region, country, postal code</li> <li>lookup by contact id</li> <li>retrieve all contacts by a given last name</li> <li>retrieve counts by zip code</li> </ul>
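<p>As a hedged starting point only (these table and column names are my own, and other layouts are equally valid), the requirements above suggest one table per query plus a counter table:</p> <pre><code class="language-sql">--source of truth, lookup by contact id
CREATE TABLE my_key.contacts (
  id uuid,
  first_name text,
  last_name text,
  address text,
  state text,
  country text,
  postal_code text,
  PRIMARY KEY(id));

--retrieve all contacts by a given last name
CREATE TABLE my_key.contacts_by_last_name (
  last_name text,
  id uuid,
  first_name text,
  PRIMARY KEY(last_name, id));

--counts by zip code, maintained by the application on each insert
CREATE TABLE my_key.contact_counts_by_postal_code (
  postal_code text,
  contact_count counter,
  PRIMARY KEY(postal_code));

UPDATE my_key.contact_counts_by_postal_code
  SET contact_count = contact_count + 1 WHERE postal_code = '77002';
</code></pre>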
<h3 id="music-service">Music Service</h3> <p>This takes the basics from the previous exercise and requires a more involved understanding of the concepts. It will require many tables and some difficult trade-offs on partition sizing. There is no one correct way to do this (one possible fragment is sketched after the list).</p> <h4 id="requirements-1">requirements</h4> <ul> <li>songs should have album, artist, name, and total likes</li> <li>the contact list exercise can be used as a basis for the "users"; users will have no login because we're trusting people</li> <li>retrieve all songs by artist</li> <li>retrieve all songs in an album</li> <li>retrieve an individual song and how many times it's been liked</li> <li>retrieve all liked songs for a given user</li> <li>"like" a song</li> <li>keep a count of how many times a song has been listened to by all users</li> </ul>
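<p>Again as a hedged sketch rather than a full answer (the names are mine, and the partition-sizing trade-offs are deliberately left open), the "by artist" lookup and the all-users listen counter might start out like:</p> <pre><code class="language-sql">--retrieve all songs by artist; artist is the partition key
CREATE TABLE my_key.songs_by_artist (
  artist text,
  album text,
  song_name text,
  song_id uuid,
  PRIMARY KEY(artist, album, song_name));

--listen counts across all users, one counter row per song
CREATE TABLE my_key.song_listen_counts (
  song_id uuid,
  listens counter,
  PRIMARY KEY(song_id));

UPDATE my_key.song_listen_counts
  SET listens = listens + 1 WHERE song_id = e016ae43-3d4e-4093-b745-8583627eb1fe;
</code></pre>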
<h3 id="iot-analytics">IoT Analytics</h3> <p>This will require some extensive time series modeling and takes some of the lessons from the Music Service further. The table(s) used will be informed by the queries.</p> <h4 id="requirements-2">requirements</h4> <ul> <li>use the music service data model as a basis; we will be tracking each "registered device" that uses the music service</li> <li>a given user will have 1-5 devices</li> <li>log all songs listened to by a given device</li> <li>retrieve songs listened for a device by day</li> <li>retrieve songs listened for a device by month</li> <li>retrieve total listen time for a device by day</li> <li>retrieve total listen time for a device by month</li> <li>retrieve artists listened for a device by day</li> <li>retrieve artists listened for a device by month</li> </ul> Getting started with Cassandra: Load testing Cassandra in brief https://lostechies.com/ryansvihla/2020/02/04/getting-started-cassandra-part-2/ Los Techies urn:uuid:c943ad17-c8c9-9027-730e-494f4fdb5d29 Tue, 04 Feb 2020 20:23:00 +0000 <p>An opinionated guide on the "correct" way to load test Cassandra. I'm aiming to keep this short, so I'm going to leave out a <em>lot</em> of the nuance that one would normally get into when talking about load testing Cassandra.</p> <h2 id="if-you-have-no-data-model-in-mind">If you have no data model in mind</h2> <p>Use cassandra-stress since it's around:</p> <ul> <li>first, initialize the keyspace with RF 3: <code>cassandra-stress "write cl=ONE no-warmup -col size=FIXED(15000) -schema replication(strategy=SimpleStrategy,factor=3)"</code></li> <li>second, run stress: <code>cassandra-stress "mixed n=1000k cl=ONE -col size=FIXED(15000)"</code></li> <li>repeat as often as you'd like, with as many clients as you want.</li> </ul> <h2 id="if-you-have-a-specific-data-model-in-mind">If you have a specific data model in mind</h2> <p>You can use cassandra-stress, but I suspect you're going to find your data model isn't supported (collections, for example) or that you don't have the required PhD to make it work the way you want. There are probably 2 dozen options from here you can use to build your load test; some of the more popular ones are gatling, jmeter, and tlp-stress. My personal favorite, though, is to write a small, simple python or java program that replicates your use case accurately in your own code, using a faker library to generate your data. This takes more time, but you tend to have fewer surprises in production, as it will accurately model your code.</p> <h3 id="small-python-script-with-python-driver">Small python script with python driver</h3> <ul> <li>use python3 and virtualenv</li> <li><code>python -m venv venv</code></li> <li><code>source venv/bin/activate</code></li> <li>read and follow the install <a href="https://docs.datastax.com/en/developer/python-driver/3.21/getting_started/">docs</a></li> <li>if you want to skip the docs, you can get away with <code>pip install cassandra-driver</code></li> <li>install a faker library: <code>pip install Faker</code></li> </ul> <pre><code class="language-python">import argparse
import uuid
import time
import random
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement
from faker import Faker

parser = argparse.ArgumentParser(description='simple load generator for cassandra')
parser.add_argument('--hosts', default='127.0.0.1', type=str,
                    help='comma separated list of hosts to use for contact points')
parser.add_argument('--port', default=9042, type=int, help='port to connect to')
parser.add_argument('--trans', default=1000000, type=int, help='number of transactions')
parser.add_argument('--inflight', default=25, type=int, help='number of operations in flight')
parser.add_argument('--errors', default=-1, type=int,
                    help='number of errors before stopping. default is unlimited')
args = parser.parse_args()

fake = Faker(['en-US'])
hosts = args.hosts.split(",")
cluster = Cluster(hosts, port=args.port)
try:
    session = cluster.connect()
    print("setup schema")
    session.execute("CREATE KEYSPACE IF NOT EXISTS my_key WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1}")
    session.execute("CREATE TABLE IF NOT EXISTS my_key.my_table (id uuid, name text, address text, state text, zip text, balance int, PRIMARY KEY(id))")
    session.execute("CREATE TABLE IF NOT EXISTS my_key.my_table_by_zip (zip text, id uuid, balance bigint, PRIMARY KEY(zip, id))")
    print("allow schema to replicate throughout the cluster for 30 seconds")
    time.sleep(30)

    print("prepare queries")
    insert = session.prepare("INSERT INTO my_key.my_table (id, name, address, state, zip, balance) VALUES (?, ?, ?, ?, ?, ?)")
    insert_rollup = session.prepare("INSERT INTO my_key.my_table_by_zip (zip, id, balance) VALUES (?, ?, ?)")
    row_lookup = session.prepare("SELECT * FROM my_key.my_table WHERE id = ?")
    rollup = session.prepare("SELECT sum(balance) FROM my_key.my_table_by_zip WHERE zip = ?")

    futures = []
    ids = []
    error_counter = 0
    query = None

    def get_id():
        items = len(ids)
        if items == 0:
            # nothing present yet, so return something random
            return uuid.uuid4()
        if items == 1:
            return ids[0]
        return ids[random.randint(0, items - 1)]

    print("starting transactions")
    for i in range(args.trans):
        chance = random.randint(1, 100)
        if chance &lt;= 50:
            # half the time, write a new row to both tables in a logged batch
            new_id = uuid.uuid4()
            ids.append(new_id)
            state = fake.state_abbr()
            zip_code = fake.zipcode_in_state(state)
            balance = random.randint(1, 50000)
            query = BatchStatement()
            query.add(insert.bind([new_id, fake.name(), fake.address(), state, zip_code, balance]))
            query.add(insert_rollup.bind([zip_code, new_id, balance]))
        elif chance &lt;= 75:
            # a quarter of the time, look up a single row by id
            query = row_lookup.bind([get_id()])
        else:
            # otherwise run the rollup aggregate for a random zip
            zip_code = fake.zipcode()
            query = rollup.bind([zip_code])
        futures.append(session.execute_async(query))
        if i % args.inflight == 0:
            for f in futures:
                try:
                    f.result()  # we don't care about the result so toss it
                except Exception as e:
                    print("unexpected exception %s" % e)
                    if args.errors &gt; 0:
                        error_counter = error_counter + 1
                        if error_counter &gt; args.errors:
                            print("too many errors, stopping. Consider raising the --errors flag if this happens more quickly than you'd like")
                            break
            futures = []
            print("submitted %i of %i transactions" % (i, args.trans))
finally:
    cluster.shutdown()
</code></pre> <h3 id="small-java-program-with-latest-java-driver">Small java program with latest java driver</h3> <ul> <li>download java 8</li> <li>create a command line application in your project technology of choice (I used maven in this example for no particularly good reason)</li> <li>download a faker lib like <a href="https://github.com/DiUS/java-faker">this one</a> and the <a href="https://github.com/datastax/java-driver">Cassandra java driver from DataStax</a>, again using your preferred technology to do so.</li> <li>run the following code sample somewhere (set your RF and your desired queries and data model)</li> <li>use different numbers of clients at your cluster until you get enough "saturation" or the server stops responding.</li> </ul> <p><a href="https://github.com/rssvihla/simple_cassandra_load_test/tree/master/java/simple-cassandra-stress">See the complete example</a></p> <pre><code class="language-java">package pro.foundev;

import java.lang.RuntimeException;
import java.lang.Thread;
import java.util.Locale;
import java.util.ArrayList;
import java.util.List;
import java.util.function.*;
import java.util.Random;
import java.util.UUID;
import java.util.concurrent.CompletionStage;
import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.CqlSessionBuilder;
import com.datastax.oss.driver.api.core.cql.*;
import com.github.javafaker.Faker;

public class App {
    public static void main(String[] args) {
        // truncated in the feed; see the complete example linked above
    }
}
</code></pre>
Getting started with Cassandra: Setting up a Multi-DC environment https://lostechies.com/ryansvihla/2020/02/03/getting-started-cassandra-part-1/ Los Techies urn:uuid:f11f8200-727e-e8cb-60b2-06dc45a16751 Mon, 03 Feb 2020 20:23:00 +0000 <p>This is a quick and dirty opinionated guide to setting up a Cassandra cluster with multiple data centers.</p> <h2 id="a-new-cluster">A new cluster</h2> <ul> <li>In cassandra.yaml set <code>endpoint_snitch: GossipingPropertyFileSnitch</code>; some prefer PropertyFileSnitch for the ease of pushing out one file. GossipingPropertyFileSnitch is harder to get wrong in my experience.</li> <li>set dc in cassandra-rackdc.properties to whatever dc you want that node to be in. Ignore rack until you really need it; 8/10 people that use racks do it wrong the first time, and it's slightly painful to unwind.</li> <li>finish adding all of your nodes.</li> <li>if using authentication, set the <code>system_auth</code> keyspace to use NetworkTopologyStrategy in cqlsh, with RF 3 (or == number of replicas if less than 3 per dc) for each datacenter you've created: <code>ALTER KEYSPACE system_auth WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'data_center_name' : 3, 'data_center_name' : 3};</code>, and run repair after changing RF</li> <li><code>nodetool repair -pr system_auth</code> on each node in the cluster on the new keyspace.</li> <li>create your new keyspaces for your app with RF 3 in each dc (much like you did for the <code>system_auth</code> step above; see the sketch after this list).</li> <li><code>nodetool repair -pr whatever_new_keyspace</code> on each node in the cluster on the new keyspace.</li> </ul>
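<p>As a minimal sketch of that keyspace step (the keyspace and data center names here are placeholders for your own), creating an app keyspace with RF 3 in each of two data centers looks like:</p> <pre><code class="language-sql">-- 'dc_east' and 'dc_west' must match the dc names set in cassandra-rackdc.properties
CREATE KEYSPACE whatever_new_keyspace
  WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc_east': 3, 'dc_west': 3};
</code></pre>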
<h2 id="an-existing-cluster">An existing cluster</h2> <p>This is harder and involves more work and more options, but I'm going to discuss the way that gets you into the least amount of trouble operationally.</p> <ul> <li>make sure <em>none</em> of the drivers you use to connect to Cassandra are using DowngradingConsistencyRetryPolicy, or using the maligned withUsedHostsPerRemoteDc, especially allowRemoteDCsForLocalConsistencyLevel, as this may cause your driver to send requests to the remote data center before it's populated with data.</li> <li>switch <code>endpoint_snitch</code> on each node to GossipingPropertyFileSnitch</li> <li>set dc in cassandra-rackdc.properties to whatever dc you want that node to be in. Ignore rack until you really need it; 8/10 people that use racks do it wrong the first time, and it's slightly painful to unwind.</li> <li>bootstrap each node in the new data center.</li> <li>if using authentication, set the <code>system_auth</code> keyspace to use NetworkTopologyStrategy in cqlsh, with RF 3 (or == number of replicas if less than 3 per dc) for each datacenter you've created: <code>ALTER KEYSPACE system_auth WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'data_center_name' : 3, 'data_center_name' : 3};</code>, and run repair after changing RF</li> <li><code>nodetool repair -pr system_auth</code> on each node in the cluster on the new keyspace.</li> <li>alter your app keyspaces to RF 3 in each dc (much like you did for the <code>system_auth</code> step above)</li> <li><code>nodetool repair -pr whatever_keyspace</code> on each node in the cluster on the new keyspace.</li> </ul> <p>Enjoy your new data center.</p> <h3 id="how-to-get-data-to-new-dc">how to get data to new dc</h3> <h4 id="repair-approach">Repair approach</h4> <p>Best done if your repair jobs can't be missed or stopped, e.g. because you have a process like OpsCenter or Reaper running repairs. It also has the advantage of being very easy, and if you've already automated repair you're basically done.</p> <ul> <li>let repair jobs continue...that's it!</li> </ul> <h4 id="rebuild-approach">Rebuild approach</h4> <p>Faster and less resource intensive, if you have enough time to complete it while repair is stopped. Rebuild is easier to 'resume' than repair in many ways, so this has a number of advantages.</p> <ul> <li>run <code>nodetool rebuild</code> on each node in the new dc only; if it dies for some reason, rerunning the command will resume the process.</li> <li>run <code>nodetool cleanup</code></li> </ul> <h4 id="yolo-rebuild-with-repair">YOLO rebuild with repair</h4> <p>This will probably overstream its share of data, and honestly a lot of folks do this for some reason in practice:</p> <ul> <li>leave repair jobs running</li> <li>run <code>nodetool rebuild</code> on each node in the new dc only; if it dies for some reason, rerunning the command will resume the process.</li> <li>run <code>nodetool cleanup</code> on each node</li> </ul> <h2 id="cloud-strategies">Cloud strategies</h2> <p>There are a few valid approaches to this and none of them are wrong IMO.</p> <h3 id="region--dc-rack--az">region == DC, rack == AZ</h3> <p>You will need to get into racks, and a lot of people get this wrong and imbalance the racks, but you get the advantage of more intelligent failure modes, with racks mapping to AZs.</p> <h3 id="azregardless-of-region--dc">AZ..regardless of region == DC</h3> <p>This allows things to be balanced easily, but you have no good option for racks then.
However, some people think racks are overrated, and I'd say a majority of clusters run with one rack.</p> Gocode Vim Plugin and Go Modules https://blog.jasonmeridth.com/posts/gocode-vim-plugin-and-go-modules/ Jason Meridth urn:uuid:c9be1149-395b-e365-707e-8fa2f475093c Sat, 05 Jan 2019 17:09:26 +0000 <p>I recently purchased <a href="https://lets-go.alexedwards.net/">Let's Go</a> from Alex Edwards. I wanted an end-to-end Golang website tutorial. It has been great. Lots learned.</p> <p>Unfortunately, he is using Go modules, and the version of the gocode Vim plugin I was using did not support Go modules.</p> <h3 id="solution">Solution:</h3> <p>Use <a href="https://github.com/stamblerre/gocode">this fork</a> of the gocode Vim plugin and you'll get support for Go modules.</p> <p>I use <a href="https://github.com/junegunn/vim-plug">Vim Plug</a> for my Vim plugins. I'm a huge fan of Vundle, but I like the post-actions feature of Plug. I just had to change one line of my vimrc and re-run updates.</p> <pre><code class="language-diff">diff --git a/vimrc b/vimrc
index 3e8edf1..8395705 100644
--- a/vimrc
+++ b/vimrc
@@ -73,7 +73,7 @@ endif
     let editor_name='nvim'
     Plug 'zchee/deoplete-go', { 'do': 'make'}
   endif
-  Plug 'nsf/gocode', { 'rtp': 'vim', 'do': '~/.config/nvim/plugged/gocode/vim/symlink.sh' }
+  Plug 'stamblerre/gocode', { 'rtp': 'vim', 'do': '~/.vim/plugged/gocode/vim/symlink.sh' }
   Plug 'godoctor/godoctor.vim', {'for': 'go'} " Gocode refactoring tool
 " }
</code></pre> <p>That is the line I had to change; then I ran <code>:PlugUpdate!</code> and the new plugin was installed.</p> <p>I figured all of this out thanks to <a href="https://github.com/zchee/deoplete-go/issues/134#issuecomment-435436305">this comment</a> by <a href="https://github.com/cippaciong">Tommaso Sardelli</a> on Github.
Thank you, Tommaso.</p> Raspberry Pi Kubernetes Cluster - Part 4 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4/ Jason Meridth urn:uuid:56f4fdcb-5310-bbaa-c7cf-d34ef7af7682 Fri, 28 Dec 2018 16:35:23 +0000 <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1">Raspberry Pi Kubernetes Cluster - Part 1</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2">Raspberry Pi Kubernetes Cluster - Part 2</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3">Raspberry Pi Kubernetes Cluster - Part 3</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4">Raspberry Pi Kubernetes Cluster - Part 4</a></p> <p>Howdy again.</p> <p>In this post I'm going to show you how to create a docker image that runs on the ARM architecture, and also how to deploy it and view it.</p> <p>To start, please view my basic flask application called fl8 <a href="https://github.com/meridth/fl8">here</a>.</p> <p>If you'd like to clone and use it:</p> <pre><code class="language-bash">git clone git@github.com:meridth/fl8.git &amp;&amp; cd fl8
</code></pre> <h1 id="arm-docker-image">ARM docker image</h1> <p>First we need to learn about QEMU.</p> <h3 id="what-is-qemu-and-qemu-installation">What is QEMU and QEMU installation</h3> <p>QEMU (Quick EMUlator) is an open-source hosted hypervisor, i.e. a hypervisor running on an OS just as other computer programs do, which performs hardware virtualization. QEMU emulates CPUs of several architectures, e.g. x86, PPC, ARM and SPARC. It allows the execution of non-native target executables, emulating native execution and, as we require in this case, the cross-building process.</p> <h3 id="base-docker-image-that-includes-qemu">Base Docker image that includes QEMU</h3> <p>Please open <code>Dockerfile.arm</code> and notice the first line: <code>FROM hypriot/rpi-alpine</code>. This is a base image that includes the target qemu statically linked executable, <em>qemu-arm-static</em> in this case. I chose <code>hypriot/rpi-alpine</code> because the alpine base images are much smaller than other base images.</p> <h3 id="register-qemu-in-the-build-agent">Register QEMU in the build agent</h3> <p>To add QEMU in the build agent there is a specific Docker image that performs what we need, so just run this in your command line:</p> <pre><code class="language-bash">docker run --rm --privileged multiarch/qemu-user-static:register --reset
</code></pre> <h3 id="build-image">Build image</h3> <pre><code class="language-bash">docker build -f ./Dockerfile.arm -t meridth/rpi-fl8 .
</code></pre> <p>And voila! You now have an image that will run on Raspberry Pis.</p> <h1 id="deployment-and-service">Deployment and Service</h1> <p><code>./run-rpi.sh</code> is my script where I run a Kubernetes deployment with 3 replicas and a Kubernetes service.
Please read <code>fl8-rpi-deployment.yml</code> and <code>fl8-rpi-service.yml</code>. They differ from the other deployment and service files only by labels. Labels are key/value pairs that can be used by selectors later.</p> <p>The deployment will pull my image from <code>meridth/rpi-fl8</code> on dockerhub. If you have uploaded your docker image somewhere, you can change the deployment file to pull that image instead.</p> <h1 id="viewing-application">Viewing application</h1> <pre><code class="language-bash">kubectl get pods
</code></pre> <p>Choose a pod to create the port-forwarding tunnel to.</p> <pre><code class="language-bash">kubectl port-forward [pod-name] [app-port]:[app-port]
</code></pre> <p>Example: <code>kubectl port-forward rpi-fl8-5d84dd8ff6-d9tgz 5010:5010</code></p> <p>The final result when you go to <code>http://localhost:5010</code> in a browser:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/port_forward.png" alt="port forward result" /></p> <p>Hope this helps someone else. Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4/">Raspberry Pi Kubernetes Cluster - Part 4</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on December 28, 2018.</p> Raspberry Pi Kubernetes Cluster - Part 3 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3/ Jason Meridth urn:uuid:c12fa6c5-8e7a-6c5d-af84-3c0452cf4ae4 Mon, 24 Dec 2018 21:59:23 +0000 <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1">Raspberry Pi Kubernetes Cluster - Part 1</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2">Raspberry Pi Kubernetes Cluster - Part 2</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3">Raspberry Pi Kubernetes Cluster - Part 3</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4">Raspberry Pi Kubernetes Cluster - Part 4</a></p> <p>Well, it took me long enough to follow up on my previous posts. There are reasons.</p> <ol> <li>The day job has been fun and busy</li> <li>Family life has been fun and busy</li> <li>I kept hitting annoying errors when trying to get my cluster up and running</li> </ol> <p>The first two reasons are the usual reasons a person doesn't blog. :)</p> <p>The last one is what prevented me from blogging sooner. I had multiple issues when trying to use <a href="https://rak8s.io">rak8s</a> to set up my cluster. I'm a big fan of <a href="https://ansible.com">Ansible</a> and I do not like running scripts over and over.
I did read <a href="https://gist.github.com/alexellis/fdbc90de7691a1b9edb545c17da2d975">K8S on Raspbian Lite</a> from top to bottom and realized automation would make this much better.</p> <!--more--> <h3 id="the-issues-i-experienced">The issues I experienced:</h3> <h4 id="apt-get-update-would-not-work">apt-get update would not work</h4> <p>I started with the vanilla Raspbian Lite image to run on my nodes and had MANY MANY issues with running <code class="highlighter-rouge">apt-get update</code> and <code class="highlighter-rouge">apt-get upgrade</code>. The mirrors would disconnect often and just stall. This doesn’t help my attempted usage of rak8s, which does both on the <code class="highlighter-rouge">cluster.yml</code> run (which I’ll talk about later).</p> <h4 id="rak8s-changes-needed-to-run-hypriotos-and-kubernetes-1131">rak8s changes needed to run HypriotOS and kubernetes 1.13.1</h4> <p>Clone the repo locally and I’ll walk you through what I changed to get <a href="https://rak8s.io">rak8s</a> working for me and HypriotOS.</p> <p>Change the following files:</p> <ul> <li><code class="highlighter-rouge">ansible.cfg</code> <ul> <li>change user from <code class="highlighter-rouge">pi</code> to <code class="highlighter-rouge">pirate</code></li> </ul> </li> <li><code class="highlighter-rouge">roles/kubeadm/tasks/main.yml</code> <ul> <li>add <code class="highlighter-rouge">ignore_errors: True</code> to the <code class="highlighter-rouge">Disable Swap</code> task</li> <li>I have an open PR for this <a href="https://github.com/rak8s/rak8s/pull/46">here</a></li> </ul> </li> <li><code class="highlighter-rouge">group_vars/all.yml</code> <ul> <li>Change <code class="highlighter-rouge">kubernetes_package_version</code> to <code class="highlighter-rouge">"1.13.1-00"</code></li> <li>Change <code class="highlighter-rouge">kubernetes_version</code> to <code class="highlighter-rouge">"v1.13.1"</code></li> </ul> </li> </ul> <p>After you make those changes you can run <code class="highlighter-rouge">ansible-playbook cluster.yml</code> as the rak8s documentation suggests. Please note this is after you edit <code class="highlighter-rouge">inventory</code> and copy <code class="highlighter-rouge">ssh</code> keys to the raspberry pis.</p> <h4 id="flannel-networking-issue-once-nodes-are-up">Flannel networking issue once nodes are up</h4> <p>After I got all of the nodes up I noticed the master node was marked as <code class="highlighter-rouge">NotReady</code>, and when I ran <code class="highlighter-rouge">kubectl describe node raks8000</code> I saw the following error:</p> <blockquote> <p>KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized</p> </blockquote> <p>This error is known in kubernetes &gt; 1.12 and flannel v0.10.0. It is mentioned in <a href="https://github.com/coreos/flannel/issues/1044">this issue</a>. The fix is specifically mentioned <a href="https://github.com/coreos/flannel/issues/1044#issuecomment-427247749">here</a>.
It is to run the following command:</p> <p><code class="highlighter-rouge">kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml</code></p> <p>After reading the issue it seems the fix will be in the next version of flannel and will be backported to v0.10.0.</p> <h1 id="a-running-cluster">A running cluster</h1> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/running_cluster.png" alt="Running Cluster" /></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3/">Raspberry Pi Kubernetes Cluster - Part 3</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on December 24, 2018.</p> MVP how minimal https://lostechies.com/ryansvihla/2018/12/20/mvp-how-minimal/ Los Techies urn:uuid:3afadd9e-98a7-8d37-b797-5403312a2999 Thu, 20 Dec 2018 20:00:00 +0000 MVPs or Minimum Viable Products are pretty contentious ideas for something seemingly simple. Depending on background and where people are coming from experience-wise, those terms carry radically different ideas. In recent history I’ve seen up close two extreme contrasting examples of MVP: <p>MVPs or Minimum Viable Products are pretty contentious ideas for something seemingly simple. Depending on background and where people are coming from experience-wise, those terms carry radically different ideas. In recent history I’ve seen up close two extreme contrasting examples of MVP:</p> <ul> <li>Mega Minimal: website and db, mostly manual on the backend</li> <li>Mega Mega: provisioning system, dynamic tuning of systems via ML, automated operations, monitoring, and a few others I’m leaving out.</li> </ul> <h2 id="feedback">Feedback</h2> <p>If we’re evaluating which approach gives us more feedback, the Mega Minimal MVP is gonna win hands down here. Some will counter that they don’t want to give people a bad impression with a limited product, and that’s fair, but it’s better than no impression (the dreaded never-shipped MVP). The Mega Mega MVP I referenced took months to demo, only had one of those checkboxes set up, and wasn’t ever demoed again. So we can categorically say it failed at getting any feedback.</p> <p>Whereas the Mega Minimal MVP got enough feedback and users for the founders to realize that wasn’t a business for them. Better than realizing it after hiring a huge team and sinking a million plus into dev efforts, for sure. Not the happy ending I’m sure you all were expecting, but I view that as mission accomplished.</p> <h2 id="core-value">Core Value</h2> <ul> <li>Mega Minimal: they only focused on a single feature, executed well enough that people gave them some positive feedback, but not enough to justify automating everything.</li> <li>Mega Mega: I’m not sure anyone who talked about the product saw the same core value, and there were several rewrites and shifts along the way.</li> </ul> <p>Advantage Mega Minimal again.</p> <h2 id="what-about-entrants-into-a-crowded-field">What about entrants into a crowded field</h2> <p>Well that is harder, and the MVP tends to be less minimal, because the baseline expectations are just much higher. I still lean towards Mega Minimal having a better chance at getting users, since there is a non-zero chance the Mega Mega MVP will never get finished.
I still think the exercise of focusing on the core value that makes your product <em>not</em> a me-too is worth it, as is considering how you can find a niche in a crowded field instead of just being “better”; your MVP can be that niche differentiator.</p> <h2 id="internal-users">Internal users</h2> <p>Sometimes a good middle ground is considering getting lots of internal users if you’re really worried about bad experiences. This has its definite downsides, however, and you may not get diverse enough opinions. But it does give you some feedback while saving some face or bad experiences. I often think of the example of EC2, which was heavily used by Amazon before being released to the world. That was a luxury Amazon had, where their customer base and their user base happened to be very similar, and they had bigger scale needs than any of their early customers, so the early internal feedback loop was a very strong signal.</p> <h2 id="summary">Summary</h2> <p>In the end, however you want to approach MVPs is up to you, and if you find success with a meatier MVP than I have, please don’t let me push you away from what works. But if you are having trouble shipping and are getting pushed all the time to add one more feature to that MVP before releasing it, consider stepping back and asking: is this really core value for the product? Do you already have your core value? If so, consider just releasing it.</p> Surprise Go is ok for me now https://lostechies.com/ryansvihla/2018/12/13/surprise-go-is-ok/ Los Techies urn:uuid:53abf2a3-23f2-5855-0e2d-81148fb908bf Thu, 13 Dec 2018 20:23:00 +0000 I’m surprised to say this: I am ok using Go now. It’s not my style but I am able to build most anything I want to with it, and the tooling around it continues to improve. <p>I’m surprised to say this: I am ok using Go now. It’s not my style but I am able to build most anything I want to with it, and the tooling around it continues to improve.</p> <p>About 7 months ago I wrote about all the things I didn’t really care for in Go, and now I either am no longer so bothered by them or things have improved.</p> <p>Go Modules so far is a huge improvement over Dep and Glide for dependency management. It’s easy to set up, performant, and eliminates the GOPATH silliness. I haven’t tried it yet with some of the goofier libraries that gave me problems in the past (the k8s api for example) so the jury is out on that, but again, pretty impressed. I no longer have to check in vendor to speed up builds. Lesson: use Go Modules.</p> <p>I pretty much stopped using channels for everything but shutdown signals, and that fits my preferences pretty well. I use mutexes and semaphores for my multithreaded code and feel no guilt about it. This cut out a lot of pain for me, and with the excellent race detector I feel really comfortable writing multi-threaded code in Go now. Lesson: don’t use channels much.</p> <p>Lack of generics still sometimes sucks, but I usually implement some crappy casting with dynamic types if I need that. I’ve sorta made my peace with just writing more code, and am no longer so hung up. Lesson: relax.</p> <p>Error handling I’m still struggling with. I thought about using one of the error Wrap() libraries, but an official one is in draft spec now, so I’ll wait on that. I now tend to have less nesting of functions as a result; this probably means longer functions than I like, but my code looks more “normal” now. This is a trade-off I’m ok with.
Lesson: relax more.</p> <p>The main virtue I see in Go now is that it is very popular in the infrastructure space where I am, and so it’s becoming the common tongue (largely replacing Python for those sorts of tasks). For this, honestly, it’s about right. It’s easy to rip out command line tools and deploy binaries for every platform with no runtime install.</p> <p>The community’s conservative attitude I sort of view as a feature now, in that there isn’t a bunch of different options that are popular and there is no arguing over what file format is used. This drove me up the wall initially, but I appreciate how much less time I spend on these things now.</p> <p>So now I suspect Go will be my “last” programming language. It’s not the one I would have chosen, but where I am at in my career, where most of my dev work is automation and tooling, it fits the bill pretty well.</p> <p>Also, equally important, most of the people working with me didn’t have full time careers as developers or spend their time reading “Domain Driven Design” (amazing book), so adding in a bunch of nuanced stuff that may be technically optimal but assumes the reader grasps all that nuance isn’t a good tradeoff for me.</p> <p>So I think I sorta get it now. I’ll never be a cheerleader for the language but it definitely solves my problems well enough.</p> Collaboration vs. Critique https://lostechies.com/derekgreer/2018/05/18/collaboration-vs-critique/ Los Techies urn:uuid:8a2d0bfb-9efe-2fd2-1e9b-6ba6d06055da Fri, 18 May 2018 17:00:00 +0000 While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development is done by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day and most of the problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique. <p>While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development is done by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model.
That is to say, software developers tend to be confronted with new problems every day, and most of the problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique.</p> <p>To help illustrate what I’ve seen happen countless times both in catch-up design sessions and code reviews, consider the following two scenarios:</p> <h3 id="scenario-1">Scenario 1</h3> <p>Tom and Sally are both developers on a team maintaining a large-scale application. Tom takes the next task in the development queue, which happens to have some complex processes that will need to be addressed. Being the good development team that they are, both Tom and Sally are aware of the requirements of the application (i.e. how the app needs to work from the user’s perspective), but they have deferred design-level discussions until the time of implementation. After Tom gets into the process a little, seeing that the problem is non-trivial, he pings Sally to help him brainstorm different approaches to solving the problem. Tom and Sally have been working together for over a year and have become accustomed to these sorts of ad-hoc design sessions. As they begin discussing the problem, they each start tossing ideas out on the proverbial table, resulting in multiple approaches to compare and contrast. The nature of the discussion is such that neither Tom nor Sally are embarrassed or offended when the other points out flaws in the design, because there’s a sense of safety in their mutual understanding that this is a brainstorming session and that neither has thought in depth about the solutions being set forth yet. Tom throws out a couple of ideas, but ends up shooting them down himself as he uses Sally as a sounding board for the ideas. Sally does the same, but toward the end of the conversation suggests a slight alteration to one of Tom’s initial suggestions that they think may make it work after all. They end the session both with a sense that they’ve worked together to arrive at the best solution.</p> <h3 id="scenario-2">Scenario 2</h3> <p>Bill and Jake are developers on another team. They tend to work in a more siloed fashion, but they do rely upon one another for help from time to time and they are required to do code reviews prior to their code being merged into the main branch of development. Bill takes the next task in the development queue and spends the better part of an afternoon working out a solution with a basic working skeleton of the direction he’s going. The next day he decides that it might be good to have Jake take a look at the design to make him aware of the direction. Seeing where Bill’s design misses a few opportunities to make the implementation more adaptable to changes in the future, Jake points out where he would have done things differently.
Bill acknowledges that Jake’s suggestions would be better and would have probably been just as easy to implement from the beginning, but inwardly he’s a bit disappointed that Jake didn’t like his design as-is and that he has to do some rework. In the end, Bill is left with a feeling of critique rather than collaboration.</p> <p>Whether it’s a high-level UML diagram or working code, how one person tends to perceive feedback on the ideas comprising a potential solution has everything to do with timing. It can be the exact same feedback they would have received either way, but when the feedback occurs often makes the difference between whether it’s perceived as collaboration or critique. It’s all about when the conversation happens.</p> Testing Button Click in React with Jest https://derikwhittaker.blog/2018/05/07/testing-button-click-in-react-with-jest/ Maintainer of Code, pusher of bits… urn:uuid:a8e7d9fd-d718-a072-55aa-0736ac21bec4 Mon, 07 May 2018 17:01:59 +0000 When building React applications you will most likely find yourself using Jest as your testing framework.  Jest has some really, really cool features built in.  But when you use Enzyme you can take your testing to the next level. One really cool feature is the ability to test click events via Enzyme to ensure your &#8230; <p><a href="https://derikwhittaker.blog/2018/05/07/testing-button-click-in-react-with-jest/" class="more-link">Continue reading <span class="screen-reader-text">Testing Button Click in React with&#160;Jest</span></a></p> <p>When building <a href="https://reactjs.org/" target="_blank" rel="noopener">React</a> applications you will most likely find yourself using <a href="https://facebook.github.io/jest" target="_blank" rel="noopener">Jest</a> as your testing framework.  Jest has some really, really cool features built in.
But when you use <a href="http://airbnb.io/enzyme/docs/guides/jest.html" target="_blank" rel="noopener">Enzyme</a> you can take your testing to the next level.</p> <p>One really cool feature is the ability to test click events via Enzyme to ensure your code responds as expected.</p> <p>Before we get started you are going to want to make sure you have Jest and Enzyme installed in your application.</p> <ul> <li>Installing <a href="https://github.com/airbnb/enzyme/blob/master/docs/installation/README.md" target="_blank" rel="noopener">Enzyme</a></li> <li>Installing <a href="https://facebook.github.io/jest/docs/en/getting-started.html" target="_blank" rel="noopener">Jest</a></li> </ul> <p>Sample code under test</p> <p><img class="alignnone size-full wp-image-111" src="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640" alt="Screen Shot 2018-05-07 at 12.52.56 PM" /></p> <p>What I would like to be able to do is pull the button out of my component and test the <code>onClick</code> event handler.</p> <div class="code-snippet"> <pre class="code-content">// Make sure you have your imports set up correctly
import React from 'react';
import { shallow } from 'enzyme';

it('When active link clicked, will push correct filter message', () =&gt; {
  let passedFilterType = '';
  const handleOnTotalsFilter = (filterType) =&gt; {
    passedFilterType = filterType;
  };
  const accounts = {};

  const wrapper = shallow(&lt;MyComponent accounts={accounts} filterHeader="" onTotalsFilter={handleOnTotalsFilter} /&gt;);
  const button = wrapper.find('#archived-button');
  button.simulate('click');

  expect(passedFilterType).toBe(TotalsFilterType.archived);
});
</pre> </div> <p>Let’s take a look at the test above</p> <ol> <li>First we are going to create a callback (click handler) to catch the bubbled up values.</li> <li>We use Enzyme to create our component <code>MyComponent</code></li> <li>We use the .find() on our wrapped component to find our &lt;Button /&gt; by id</li> <li>After we get our button we can call .simulate('click') which will act as a user clicking the button.</li>
<li>We can assert that the expected value bubbles up.</li> </ol> <p>As you can see, simulating a click event of a rendered component is very straightforward, yet very powerful.</p> <p>Till next time,</p> Lessons from a year of Golang https://lostechies.com/ryansvihla/2018/05/07/lessons-from-a-year-of-go/ Los Techies urn:uuid:e37d6484-2864-cc2a-034c-cac3d89dede7 Mon, 07 May 2018 13:16:00 +0000 I’m hoping to share, in a non-negative way, how to avoid the pitfalls I ran into with my most recent work building infrastructure software on top of Kubernetes using Go. It sounded like an awesome job at first, but I ran into a lot of problems getting productive. <p>I’m hoping to share, in a non-negative way, how to avoid the pitfalls I ran into with my most recent work building infrastructure software on top of Kubernetes using Go. It sounded like an awesome job at first, but I ran into a lot of problems getting productive.</p> <p>This isn’t meant to evaluate if you should pick up Go or tell you what you should think of it; this is strictly meant to help people out that are new to the language but experienced in Java, Python, Ruby, C#, etc. and have read some basic Go getting started guide.</p> <h2 id="dependency-management">Dependency management</h2> <p>This is probably the feature most frequently talked about by newcomers to Go, and with some justification, as dependency management has been a rapidly shifting area that’s nothing like what experienced Java, C#, Ruby or Python developers are used to.</p> <p>I’ll cut to the chase: the default tool now is <a href="https://github.com/golang/dep">Dep</a>. All the other tools I’ve used, such as <a href="https://github.com/Masterminds/glide">Glide</a> or <a href="https://github.com/tools/godep">Godep</a>, are now deprecated in favor of Dep, and while Dep has advanced rapidly there are some problems you’ll eventually run into (or I did):</p> <ol> <li>Dep hangs randomly and is slow, which is supposedly network traffic <a href="https://github.com/golang/dep/blob/c8be449181dadcb01c9118a7c7b592693c82776f/docs/failure-modes.md#hangs">but it happens to everyone I know with tons of bandwidth</a>. Regardless, I’d like an option to supply a timeout and report an error.</li> <li>Versions and transitive dependency conflicts can be a real breaking issue in Go still. So without shading or its equivalent, two packages depending on different versions of a given package can break your build; there are a number of proposals to fix this but we’re not there yet.</li> <li>Dep has some goofy ways it resolves transitive dependencies and you may have to add explicit references to them in your Gopkg.toml file. You can see an example <a href="https://kubernetes.io/blog/2018/01/introducing-client-go-version-6/">here</a> under <strong>Updating dependencies – golang/dep</strong>.</li> </ol> <h3 id="my-advice">My advice</h3> <ul> <li>Avoid hangs by checking your dependencies directly into your source repository and just using the dependency tool (dep, godep, glide, it doesn’t matter) for downloading dependencies.</li> <li>Minimize transitive dependencies by keeping stuff small and using patterns like microservices when your dependency tree conflicts.</li> </ul> <h2 id="gopath">GOPATH</h2> <p>Something that takes some adjustment is that you check out all your source code in one directory with one path (by default ~/go/src), and the import path mirrors where the code is checked out.
Example:</p> <ol> <li>I want to use a package I found on github called jim/awesomeness</li> <li>I have to go to ~/go/src and mkdir -p github.com/jim</li> <li>cd into that and clone the package.</li> <li>When I reference the package in my source file it’ll be literally importing github.com/jim/awesomeness</li> </ol> <p>A better guide to GOPATH and packages is <a href="https://thenewstack.io/understanding-golang-packages/">here</a>.</p> <h3 id="my-advice-1">My advice</h3> <p>Don’t fight it, it’s actually not so bad once you embrace it.</p> <h2 id="code-structure">Code structure</h2> <p>This is a hot topic and there are a few standards for the right way to structure your code, from projects that do “file per class” to giant files with general concept names (think types.go and net.go). Also, if you’re used to using a lot of sub-packages you’re going to have issues with not being able to compile if, for example, you have two sub-packages that reference one another.</p> <h3 id="my-advice-2">My Advice</h3> <p>In the end I was reasonably ok with something like the following:</p> <ul> <li>myproject/bin for generated executables</li> <li>myproject/cmd for command line code</li> <li>myproject/pkg for code related to the package</li> </ul> <p>Now whatever you do is fine; this was just a common idiom I saw, but it wasn’t remotely all projects. I also had some luck with just jamming everything into the top level of the package and keeping packages small (and making new packages for common code that is used in several places in the code base). If I ever return to using Go for any reason I will probably just jam everything into the top level directory.</p> <h2 id="debugging">Debugging</h2> <p>No debugger! There are some projects attempting to add one, but Rob Pike finds them a crutch.</p> <h3 id="my-advice-3">My Advice</h3> <p>Lots of unit tests and print statements.</p> <h2 id="no-generics">No generics</h2> <p>Sorta self-explanatory, and it causes you a lot of pain when you’re used to reaching for these.</p> <h3 id="my-advice-4">My advice</h3> <p>Look at the code generation support which uses pragmas; this is not exactly the same as having generics, but if you have some code that has a lot of boilerplate without them this is a valid alternative. See this official <a href="https://blog.golang.org/generate">Go Blog post</a> for more details.</p> <p>If you don’t want to use generation you really only have reflection left as a valid tool, which comes with all of its lack of speed and type safety.</p> <h2 id="cross-compiling">Cross compiling</h2> <p>If you have certain features or dependencies you may find you cannot take advantage of one of Go’s better features: cross compilation.</p> <p>I ran into this when using the confluent-go/kafka library, which depends on the C librdkafka library. It basically meant I had to do all my development in a Linux VM because almost all our packages relied on this.</p> <h3 id="my-advice-5">My Advice</h3> <p>Avoid C dependencies at all costs.</p> <h2 id="error-handling">Error handling</h2> <p>Go error handling is not exception-based but return-based, and it’s got a lot of common idioms around it:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>myValue, err := doThing()
if err != nil {
    return -1, fmt.Errorf("unable to doThing %v", err)
}
</code></pre></div></div> <p>Needless to say this can get very wordy when dealing with deeply nested exceptions or when you’re interacting a lot with external systems.
It is definitely a mind shift if you’re used to throwing exceptions wherever and having one single place to catch all exceptions where they’re handled appropriately.</p> <h3 id="my-advice-6">My Advice</h3> <p>I’ll be honest, I never totally made my peace with this. I had good training from experienced opensource contributors to major Go projects, read all the right blog posts, and definitely felt like I’d heard enough from the community on why the current state of Go error handling was great in their opinions, but the lack of stack traces was a deal breaker for me.</p> <p>On the positive side, I found Dave Cheney’s advice on error handling to be the most practical, and he wrote <a href="https://github.com/pkg/errors">a package</a> containing a lot of that advice; we found it invaluable as it provided those stack traces we all missed, but you had to remember to use it.</p> <h2 id="summary">Summary</h2> <p>A lot of people really love Go and are very productive with it; I just was never one of those people, and that’s ok. However, I think the advice in this post is reasonably sound and uncontroversial. So, if you find yourself needing to write some code in Go, give this guide a quick perusal and you’ll waste a lot less time than I did getting productive in developing applications in Go.</p> Raspberry Pi Kubernetes Cluster - Part 2 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2/ Jason Meridth urn:uuid:0aef121f-48bd-476f-e09d-4ca0aa2ac602 Thu, 03 May 2018 02:13:07 +0000 <p>Howdy again.</p> <p>Alright, my 8 port switch showed up so I was able to connect my Raspberry Pi 3B+ boards to my home network. I plugged it in with 6 1ft CAT5 cables I had in my catch-all box that all of us nerds have. I’d highly suggest flexible CAT6 cables instead if you can get them, like <a href="https://www.amazon.com/Cat-Ethernet-Cable-Black-Connectors/dp/B01IQWGKQ6">here</a>. I ordered them and they showed up before I finished this post, so I am using the CAT6 cables.</p> <!--more--> <p>The IP addresses they will receive initially from my home router via DHCP can be determined with nmap. Let’s imagine my subnet is 192.168.1.0/24.</p> <p>Once I got them on the network I did the following:</p> <script src="https://gist.github.com/64e7b08729ffe779f77a7bda0221c6e9.js"> </script> <h3 id="install-raspberrian-os-on-sd-cards">Install Raspbian OS On SD Cards</h3> <p>You can get the Raspberry Pi Stretch Lite OS from <a href="https://www.raspberrypi.org/downloads/raspbian/">here</a>.</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/raspberry_pi_stretch_lite.png" alt="Raspberry Pi Stretch Lite" /></p> <p>Then use the <a href="https://etcher.io/">Etcher</a> tool to install it to each of the 6 SD cards.</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/etcher.gif" alt="Etcher" /></p> <h4 id="important">IMPORTANT</h4> <p>Before putting the cards into the Raspberry Pis you need to add an <code class="highlighter-rouge">ssh</code> folder to the root of the SD cards. This will allow you to ssh to each Raspberry Pi with default credentials (username: <code class="highlighter-rouge">pi</code> and password <code class="highlighter-rouge">raspberry</code>). Example: <code class="highlighter-rouge">ssh pi@raspberry_pi_ip</code> where <code class="highlighter-rouge">raspberry_pi_ip</code> is obtained from the nmap command above.</p> <p>Next post will be setting up kubernetes.</p>
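<p>For quick reference, the nmap scan and the ssh marker mentioned above condense to something like this (a sketch; it assumes a 192.168.1.0/24 subnet and the SD card’s boot partition mounted at /Volumes/boot, so adjust for your network and OS):</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># ping scan the subnet to find the DHCP addresses the Pis were given
nmap -sn 192.168.1.0/24
# enable ssh on a freshly flashed card; an empty file (or folder)
# named "ssh" on the boot partition tells Raspbian to start sshd
touch /Volumes/boot/ssh
</code></pre></div></div>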
<p>Thank you for reading.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2/">Raspberry Pi Kubernetes Cluster - Part 2</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on May 02, 2018.</p> Multi-Environment Deployments with React https://derikwhittaker.blog/2018/04/10/multi-environment-deployments-with-react/ Maintainer of Code, pusher of bits… urn:uuid:4c0ae985-09ac-6d2e-0429-addea1632ea3 Tue, 10 Apr 2018 12:54:17 +0000 If you are using Create-React-App to scaffold your React application there is built in support for changing environment variables based on the NODE_ENV values; this is done by using .env files.  In short this process works by having a .env, .env.production, .env.development set of files.  When you run/build your application CRA will set the NODE_ENV value &#8230; <p><a href="https://derikwhittaker.blog/2018/04/10/multi-environment-deployments-with-react/" class="more-link">Continue reading <span class="screen-reader-text">Multi-Environment Deployments with&#160;React</span></a></p> <p>If you are using <a href="https://github.com/facebook/create-react-app" target="_blank" rel="noopener">Create-React-App</a> to scaffold your React application there is <a href="https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#adding-development-environment-variables-in-env" target="_blank" rel="noopener">built in support</a> for changing environment variables based on the NODE_ENV values; this is done by using .env files.  In short this process works by having a .env, .env.production, .env.development set of files.  When you run/build your application <a href="https://github.com/facebook/create-react-app" target="_blank" rel="noopener">CRA</a> will set the NODE_ENV value to either <code>development</code> or <code>production</code>, and based on these values the correct .env file will be used.</p> <p>This works great when you have a simple deploy setup. But many times in enterprise-level applications you need support for more than just 2 environments; many times it is 3-4 environments.  Common logic would suggest that you can accomplish this via the built-in mechanism by having additional .env files and changing the NODE_ENV value to the value you care about.  However, CRA does not support this without doing an <code>eject</code>, which will eject all the default conventions and leave it to you to configure your React application.  Maybe this is a good idea, but in my case ejecting was not something I wanted to do.</p> <p>Because I did not want to do an <code>eject</code> I needed to find another solution, and after a fair amount of searching I found a solution that seems to work for me and my needs and is about the amount of effort I wanted 🙂 Raspberry Pi Kubernetes Cluster - Part 1 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1/ Jason Meridth urn:uuid:bd3470f6-97d5-5028-cf12-0751f90915c3 Sat, 07 Apr 2018 14:01:00 +0000 <p>Howdy</p> <p>This is going to be the first post about my setup of a Raspberry Pi Kubernetes Cluster. I saw a post by <a href="https://harthoover.com/kubernetes-1.9-on-a-raspberry-pi-cluster/">Hart Hoover</a> and it finally motivated me to purchase his “grocery list” and do this.
I’ve been using <a href="https://kubernetes.io/docs/getting-started-guides/minikube/">Minikube</a> for local Kubernetes testing but it doesn’t give you multi-host testing abilities. I’ve also been wanting to get deeper into my Raspberry Pi knowledge. Lots of learning and winning.</p> <p>The items I bought were:</p> <ul> <li>Six <a href="https://smile.amazon.com/dp/B07BFH96M3">Raspberry Pi 3 Model B+ Motherboards</a></li> <li>Six <a href="https://smile.amazon.com/gp/product/B010Q57T02/">SanDisk Ultra 32GB microSDHC UHS-I Card with Adapter, Grey/Red, Standard Packaging (SDSQUNC-032G-GN6MA)</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B011KLFERG/ref=oh_aui_detailpage_o02_s01?ie=UTF8&amp;psc=1">Sabrent 6-Pack 22AWG Premium 3ft Micro USB Cables High Speed USB 2.0 A Male to Micro B Sync and Charge Cables Black CB-UM63</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B01L0KN8OS/ref=oh_aui_detailpage_o02_s01?ie=UTF8&amp;psc=1">AmazonBasics 6-Port USB Wall Charger (60-Watt) - Black</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B01D9130QC/ref=oh_aui_detailpage_o02_s00?ie=UTF8&amp;psc=1">GeauxRobot Raspberry Pi 3 Model B 6-layer Dog Bone Stack Clear Case Box Enclosure also for Pi 2B B+ A+ B A</a></li> <li>One <a href="http://amzn.to/2gNzLzi">Black Box 8-Port Switch</a></li> </ul> <p>Here is the tweet when it all arrived:</p> <div class="jekyll-twitter-plugin"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">I blame <a href="https://twitter.com/hhoover?ref_src=twsrc%5Etfw">@hhoover</a> ;). I will be building my <a href="https://twitter.com/kubernetesio?ref_src=twsrc%5Etfw">@kubernetesio</a> cluster once the 6pi case shows up next Wednesday. The extra pi is to upgrade my <a href="https://twitter.com/RetroPieProject?ref_src=twsrc%5Etfw">@RetroPieProject</a>. Touch screen is an addition I want to try. Side project here I come. 
<a href="https://t.co/EebIKbsCeH">pic.twitter.com/EebIKbsCeH</a></p>&mdash; Jason Meridth (@jmeridth) <a href="https://twitter.com/jmeridth/status/980075584725422080?ref_src=twsrc%5Etfw">March 31, 2018</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> </div> <p>I spent this morning finally putting it together.</p> <p>Here is me getting started on the “dogbone case” to hold all of the Raspberry Pis:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_2.jpg" alt="The layout" /></p> <p>The bottom and one layer above:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_3.jpg" alt="The bottom and one layer above" /></p> <p>And the rest:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_4.jpg" alt="3 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_11.jpg" alt="4 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_12.jpg" alt="5 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_13.jpg" alt="6 Layers and Finished" /></p> <p>Different angles completed:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_14.jpg" alt="Finished Angle 2" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_15.jpg" alt="Finished Angle 3" /></p> <p>And connect the power:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_16.jpg" alt="Power" /></p> <p>Next post will be on getting the 6 SanDisk cards ready, putting them in, and watching the Raspberry Pis boot up and get a green light. Stay tuned.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1/">Raspberry Pi Kubernetes Cluster - Part 1</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on April 07, 2018.</p> Building AWS Infrastructure with Terraform: S3 Bucket Creation https://derikwhittaker.blog/2018/04/06/building-aws-infrastructure-with-terraform-s3-bucket-creation/ Maintainer of Code, pusher of bits… urn:uuid:cb649524-d882-220f-c253-406a54762705 Fri, 06 Apr 2018 14:28:49 +0000 If you are going to be working with any cloud provider it is highly suggested that you script out the creation/maintenance of your infrastructure.  In the AWS world you can use the native CloudFormation solution, but honestly I find this painful and the docs very lacking.  Personally, I prefer Terraform by Hashicorp.  In my experience &#8230; <p><a href="https://derikwhittaker.blog/2018/04/06/building-aws-infrastructure-with-terraform-s3-bucket-creation/" class="more-link">Continue reading <span class="screen-reader-text">Building AWS Infrastructure with Terraform: S3 Bucket&#160;Creation</span></a></p> <p>If you are going to be working with any cloud provider it is highly suggested that you script out the creation/maintenance of your infrastructure.  In the AWS world you can use the native <a href="https://www.googleadservices.com/pagead/aclk?sa=L&amp;ai=DChcSEwjD-Lry6KXaAhUMuMAKHTB8AYwYABAAGgJpbQ&amp;ohost=www.google.com&amp;cid=CAESQeD2aF3IUBPQj5YF9K0xmz0FNtIhnq3PzYAHFV6dMZVIirR_psuXDSgkzxZ0jXoyWfpECufNNfbp7JzHQ73TTrQH&amp;sig=AOD64_1b_L781SLpKXqLTFFYIk5Zv3BcHA&amp;q=&amp;ved=0ahUKEwi1l7Hy6KXaAhWD24MKHQXSCQ0Q0QwIJw&amp;adurl=" target="_blank" rel="noopener">CloudFormation</a> solution, but honestly I find this painful and the docs very lacking.
Personally, I prefer <a href="https://www.terraform.io/" target="_blank" rel="noopener">Terraform</a> by <a href="https://www.hashicorp.com/" target="_blank" rel="noopener">Hashicorp</a>.  In my experience the simplicity and ease of use, not to mention the stellar documentation, make this the product of choice.</p> <p>This is the initial post in what I hope to be a series of posts about how to use Terraform to set up/build AWS infrastructure.</p> <p>Terraform documentation on S3 creation -&gt; <a href="https://www.terraform.io/docs/providers/aws/d/s3_bucket.html" target="_blank" rel="noopener">Here</a></p> <p>In this post I will cover 2 things:</p> <ol> <li>Basic bucket setup</li> <li>Bucket setup as a static website</li> </ol> <p>To set up a basic bucket we can use the following:</p> <div class="code-snippet"> <pre class="code-content">resource "aws_s3_bucket" "my-bucket" {
  bucket = "my-bucket"
  acl    = "private"

  tags {
    Any_Tag_Name = "Tag value for tracking"
  }
}
</pre> </div> <p>When looking at the example above, the only 2 values that are required are bucket and acl.</p> <p>I have added the use of tags to show you can add custom tags to your bucket.</p> <p>Another way to set up an S3 bucket is to have it act as a static web host.  Setting this up takes a bit more configuration, but not a ton.</p> <div class="code-snippet"> <pre class="code-content">resource "aws_s3_bucket" "my-website-bucket" {
  bucket = "my-website-bucket"
  acl    = "public-read"

  website {
    index_document = "index.html"
    error_document = "index.html"
  }

  policy = &lt;&lt;POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-website-bucket/*"
    }
  ]
}
POLICY

  tags {
    Any_Tag_Name = "Tag value for tracking"
  }
}
</pre> </div> <p>The example above has 2 things that need to be pointed out.</p> <ol> <li>The website settings.  Make sure you set up the correct pages here for index/error.</li> <li>The policy settings.  Here I am using just a basic policy.  You can of course set up any policy here you want/need.</li> </ol> <p>As you can see, setting up S3 buckets is very simple and straightforward.</p> <p><strong><em>*** Reminder: S3 bucket names MUST be globally unique ***</em></strong></p> <p>Till next time,</p> SSH - Too Many Authentication Failures https://blog.jasonmeridth.com/posts/ssh-too-many-authentication-failures/ Jason Meridth urn:uuid:d7fc1034-1798-d75e-1d61-84fac635dda4 Wed, 28 Mar 2018 05:00:00 +0000 <h1 id="problem">Problem</h1> <p>I started seeing this error recently and had brain farted on why.</p> <figure class="highlight"><pre><code class="language-bash" data-lang="bash">Received disconnect from 123.123.132.132: Too many authentication failures <span class="k">for </span>hostname</code></pre></figure> <p>After a bit of googling it came back to me. This is because I’ve loaded too many keys into my ssh-agent locally (<code class="highlighter-rouge">ssh-add</code>). Why did you do that? Well, because it is easier than specifying the <code class="highlighter-rouge">IdentityFile</code> on the cli when trying to connect. But there is a threshold. This is set on the ssh host by the <code class="highlighter-rouge">MaxAuthTries</code> setting in <code class="highlighter-rouge">/etc/ssh/sshd_config</code>.</p>
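<p>If you have shell access to the server, you can confirm the effective value before changing anything (assuming root privileges; <code class="highlighter-rouge">sshd -T</code> prints the effective configuration):</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># dump the effective sshd configuration and pull out the auth-tries limit
sudo sshd -T | grep -i maxauthtries
</code></pre></div></div>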
<p>The default is 6.</p> <h1 id="solution-1">Solution 1</h1> <p>Clean up the keys in your ssh-agent.</p> <p><code class="highlighter-rouge">ssh-add -l</code> lists all the keys you have in your ssh-agent. <code class="highlighter-rouge">ssh-add -d key</code> deletes the key from your ssh-agent.</p> <h1 id="solution-2">Solution 2</h1> <p>You can solve this on the command line like this:</p> <p><code class="highlighter-rouge">ssh -o IdentitiesOnly=yes -i ~/.ssh/example_rsa foo.example.com</code></p> <p>What is IdentitiesOnly? Explained in Solution 3 below.</p> <h1 id="solution-3-best">Solution 3 (best)</h1> <p>Specify, explicitly, which key goes to which host(s) in your <code class="highlighter-rouge">.ssh/config</code> file.</p> <p>You need to configure which key (“IdentityFile”) goes with which domain (or host). You also want to handle the case when the specified key doesn’t work, which would usually be because the public key isn’t in ~/.ssh/authorized_keys on the server. The default is for SSH to then try any other keys it has access to, which takes us back to too many attempts. Setting “IdentitiesOnly” to “yes” tells SSH to only try the specified key and, if that fails, fall through to password authentication (presuming the server allows it).</p> <p>Your ~/.ssh/config would look like:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host *.myhost.com
    IdentitiesOnly yes
    IdentityFile ~/.ssh/myhost

Host secure.myhost.com
    IdentitiesOnly yes
    IdentityFile ~/.ssh/mysecurehost_rsa

Host *.myotherhost.domain
    IdentitiesOnly yes
    IdentityFile ~/.ssh/myotherhost_rsa
</code></pre></div></div> <p><code class="highlighter-rouge">Host</code> is the host the key can connect to. <code class="highlighter-rouge">IdentitiesOnly</code> means to only try <em>this</em> specific key to connect, no others. <code class="highlighter-rouge">IdentityFile</code> is the path to the key.</p> <p>You can try multiple keys if needed:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host *.myhost.com
    IdentitiesOnly yes
    IdentityFile ~/.ssh/myhost_rsa
    IdentityFile ~/.ssh/myhost_dsa
</code></pre></div></div> <p>Hope this helps someone else.</p> <p>Cheers!</p> <p><a href="https://blog.jasonmeridth.com/posts/ssh-too-many-authentication-failures/">SSH - Too Many Authentication Failures</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 28, 2018.</p> Clear DNS Cache In Chrome https://blog.jasonmeridth.com/posts/clear-dns-cache-in-chrome/ Jason Meridth urn:uuid:6a2c8c0b-c91b-5f7d-dbc7-8065f0a2f1fd Tue, 27 Mar 2018 20:42:00 +0000 <p>I’m blogging this because I keep forgetting how to do it. Yeah, yeah, I can google it.
I run this blog so I know it is always available&#8230; anywho.</p> <p>Go to:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>chrome://net-internals/#dns </code></pre></div></div> <p>Click the “Clear host cache” button.</p> <p><img src="https://blog.jasonmeridth.com/images/clear_dns_cache_in_chrome.png" alt="clear_dns_cache_in_chrome" /></p> <p>Hope this helps someone else.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/clear-dns-cache-in-chrome/">Clear DNS Cache In Chrome</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 27, 2018.</p> Create Docker Container from Errored Container https://blog.jasonmeridth.com/posts/create-docker-container-from-errored-container/ Jason Meridth urn:uuid:33d5a6b5-4c48-ae06-deb6-a505edc6b427 Mon, 26 Mar 2018 03:31:00 +0000 <p>When I’m trying to “dockerize” an application I usually have to work through some wonkiness.</p> <p>To diagnose a container that has errored out, I, obviously, look at the logs via <code class="highlighter-rouge">docker logs -f [container_name]</code>. That is sometimes helpful. It will, at minimum, tell me where I need to focus on the new container I’m going to create.</p> <p><img src="https://blog.jasonmeridth.com/images/diagnose.jpg" alt="diagnose" /></p> <p>Pre-requisites to being able to build a diagnosis container:</p> <ul> <li>You need to use <code class="highlighter-rouge">CMD</code>, <em>not</em> <code class="highlighter-rouge">ENTRYPOINT</code> in the Dockerfile <ul> <li>with <code class="highlighter-rouge">CMD</code> you’ll be able to start a shell, with <code class="highlighter-rouge">ENTRYPOINT</code> your diagnosis container will just keep trying to run that</li> </ul> </li> </ul> <p>To create a diagnosis container, do the following:</p> <ul> <li>Check your failed container ID by <code class="highlighter-rouge">docker ps -a</code></li> <li>Create a docker image from the container with <code class="highlighter-rouge">docker commit</code> <ul> <li>example: <code class="highlighter-rouge">docker commit -m "diagnosis" [failed container id]</code></li> </ul> </li> <li>Check the newly created docker image ID by <code class="highlighter-rouge">docker images</code></li> <li><code class="highlighter-rouge">docker run -it [new container image id] sh</code> <ul> <li>this takes you into a container immediately after the error occurred.</li> </ul> </li> </ul> <p>Hope this helps someone else.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/create-docker-container-from-errored-container/">Create Docker Container from Errored Container</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 25, 2018.</p> Log Early, Log Often… Saved my butt today https://derikwhittaker.blog/2018/03/21/log-early-log-often-saved-my-butt-today/ Maintainer of Code, pusher of bits… urn:uuid:395d9800-e7ce-27fd-3fc1-5e68628bc161 Wed, 21 Mar 2018 13:16:03 +0000 In a prior posting (AWS Lambda: Log Early, Log Often, Log EVERYTHING) I wrote about the virtues and value of having really in-depth logging, especially when working with cloud services.  Well today this logging saved my ASS a ton of detective work.
Little Background I have a background job (a Lambda that is called on a schedule) &#8230; <p><a href="https://derikwhittaker.blog/2018/03/21/log-early-log-often-saved-my-butt-today/" class="more-link">Continue reading <span class="screen-reader-text">Log Early, Log Often&#8230; Saved my butt&#160;today</span></a></p> <p>In a prior <a href="https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/" target="_blank" rel="noopener">posting (AWS Lambda: Log Early, Log Often, Log EVERYTHING)</a> I wrote about the virtues and value of having really in-depth logging, especially when working with cloud services.  Well today this logging saved my ASS a ton of detective work.</p> <p><strong>Little Background</strong><br /> I have a background job (a Lambda that is called on a schedule) to create/update a data cache in a <a href="https://aws.amazon.com/dynamodb/" target="_blank" rel="noopener">DynamoDB</a> table.  Basically this job will pull data from one data source and attempt to push it as create/update/delete to our Dynamo table.</p> <p>Today when I was running our application I noticed things were not loading right; in fact I had javascript errors because of null reference errors.  I knew that the issue had to be in our data, but was not sure what was wrong.  If I had not had a ton of logging (debug and info) I would have had to run our code locally and step through/debug code for hundreds of items of data.</p> <p>However, because of in-depth logging I was able to quickly go to <a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener">CloudWatch</a> and filter on a few key words and narrow hundreds/thousands of log entries down to 5.  Once I had these 5 entries I was able to expand a few of those entries and found the error within seconds.</p> <p>Total time to find the error was less than 5 minutes, and I never opened a code editor or stepped into code.</p> <p>The moral of this story: because I log everything, including data (no PII of course), I was able to quickly find the source of the error.  Now to fix the code&#8230;.</p> <p>Till next time,</p> AWS Lambda: Log early, Log often, Log EVERYTHING https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/ Maintainer of Code, pusher of bits… urn:uuid:6ee7f59b-7f4c-1312-bfff-3f9c46ec8701 Tue, 06 Mar 2018 14:00:58 +0000 In the world of building client/server applications logs are important.  They are helpful when trying to see what is going on in your application.  I have always held the belief that your logs need to be detailed enough to allow you to determine the WHAT and WHERE without even looking at the code. But let’s &#8230; <p><a href="https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/" class="more-link">Continue reading <span class="screen-reader-text">AWS Lambda: Log early, Log often, Log&#160;EVERYTHING</span></a></p> <p>In the world of building client/server applications logs are important.  They are helpful when trying to see what is going on in your application.  I have always held the belief that your logs need to be detailed enough to allow you to determine the WHAT and WHERE without even looking at the code.</p> <p>But let’s be honest, in most cases when building client/server applications logs are an afterthought.
Often this is because you can pretty easily (in most cases) debug your application and step through the code.</p> <p>When building <a href="https://aws.amazon.com/serverless/" target="_blank" rel="noopener">serverless</a> applications with technologies like <a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener">AWS Lambda</a> functions (this holds true for Azure Functions as well) your logging game really needs to step up.</p> <p>The reason for this is that you cannot really debug your Lambda in the wild (you can to some degree locally with AWS SAM or the Serverless framework).  Because of this you need to produce detailed enough logs to allow you to easily determine the WHAT and WHERE.</p> <p>When I build my serverless functions I have a few guidelines I follow:</p> <ol> <li>Info Log calls to methods, output argument data (make sure no <a href="https://en.wikipedia.org/wiki/Personally_identifiable_information" target="_blank" rel="noopener">PII</a>/<a href="https://en.wikipedia.org/wiki/Protected_health_information" target="_blank" rel="noopener">PHI</a>)</li> <li>Error Log any failures (in try/catch or .catch for promises)</li> <li>Debug Log any critical decision points</li> <li>Info Log exit calls at top level methods</li> </ol> <p>I also like to set up a simple and consistent format for my logs.  The example I follow for my Lambda logs is seen below:</p> <div class="code-snippet"> <pre class="code-content">timestamp: [logLevel] : [Class.Method] - message {data points} </pre> </div> <p>I have found that if I follow these general guidelines the pain of determining failure points in serverless environments is heavily reduced.</p> <p>Till next time,</p> Sinon Error: Attempted to wrap undefined property ‘XYZ as function https://derikwhittaker.blog/2018/02/27/sinon-error-attempted-to-wrap-undefined-property-xyz-as-function/ Maintainer of Code, pusher of bits… urn:uuid:b41dbd54-3804-6f6d-23dc-d2a04635033a Tue, 27 Feb 2018 13:45:29 +0000 I ran into a fun little error recently when working on a ReactJs application.  In my application I was using SinonJs to set up some spies on a method; I wanted to capture the input arguments for verification.  However, when I ran my test I received the following error. Attempted to wrap undefined property handlOnAccountFilter as &#8230; <p><a href="https://derikwhittaker.blog/2018/02/27/sinon-error-attempted-to-wrap-undefined-property-xyz-as-function/" class="more-link">Continue reading <span class="screen-reader-text">Sinon Error: Attempted to wrap undefined property &#8216;XYZ as&#160;function</span></a></p> <p>I ran into a fun little error recently when working on a <a href="https://reactjs.org/" target="_blank" rel="noopener">ReactJs</a> application.  In my application I was using <a href="http://sinonjs.org/" target="_blank" rel="noopener">SinonJs</a> to set up some spies on a method; I wanted to capture the input arguments for verification.
However, when I ran my test I received the following error.</p> <blockquote><p>Attempted to wrap undefined property handleOnAccountFilter as function</p></blockquote> <p>My method under test is set up as follows</p> <div class="code-snippet"> <pre class="code-content">handleOnAccountFilter = (filterModel) =&gt; { // logic here } </pre> </div> <p>The above syntax uses the <a href="https://github.com/jeffmo/es-class-public-fields" target="_blank" rel="noopener">proposed class property</a> feature, which will automatically bind the <code>this</code> context of the class to my method.</p> <p>My sinon spy is set up as follows</p> <div class="code-snippet"> <pre class="code-content">let handleOnAccountFilterSpy = null; beforeEach(() =&gt; { handleOnAccountFilterSpy = sinon.spy(AccountsListingPage.prototype, 'handleOnAccountFilter'); }); afterEach(() =&gt; { handleOnAccountFilterSpy.restore(); }); </pre> </div> <p>Everything looked right, but I was still getting this error.  It turns out that this error is due in part to the way that the class property feature implements handleOnAccountFilter.  When you use this feature the method/property is added to the class as an instance method/property, not as a prototype method/property.  This means that Sinon is not able to gain access to it prior to creating an instance of the class.</p> <p>To solve my issue I had to change the implementation to the following</p> <div class="code-snippet"> <pre class="code-content">handleOnAccountFilter(filterModel) { // logic here } </pre> </div> <p>After making the above change I needed to determine how I wanted to bind <code>this</code> to my method (Cory shows 5 ways to do this <a href="https://medium.freecodecamp.org/react-binding-patterns-5-approaches-for-handling-this-92c651b5af56" target="_blank" rel="noopener">here</a>).  I chose to bind <code>this</code> inside the constructor as below</p> <div class="code-snippet"> <pre class="code-content">constructor(props){ super(props); this.handleOnAccountFilter = this.handleOnAccountFilter.bind(this); } </pre> </div> <p>I am not a huge fan of having to do this (pun intended), but oh well.  This solved my issues.</p> <p>Till next time</p> Ensuring componentDidMount is not called in Unit Tests https://derikwhittaker.blog/2018/02/22/ensuring-componentdidmount-is-not-called-in-unit-tests/ Maintainer of Code, pusher of bits… urn:uuid:da94c1a3-2de4-a90c-97f5-d7361397a33c Thu, 22 Feb 2018 19:45:53 +0000 If you are building a ReactJs application you will often times implement componentDidMount on your components.  This is very handy at runtime, but can pose an issue for unit tests. If you are building tests for your React app you are very likely using enzyme to create instances of your component.  The issue is that when enzyme creates &#8230; <p><a href="https://derikwhittaker.blog/2018/02/22/ensuring-componentdidmount-is-not-called-in-unit-tests/" class="more-link">Continue reading <span class="screen-reader-text">Ensuring componentDidMount is not called in Unit&#160;Tests</span></a></p> <p>If you are building a <a href="https://reactjs.org/" target="_blank" rel="noopener">ReactJs</a> application you will often times implement <code>componentDidMount</code> on your components.  This is very handy at runtime, but can pose an issue for unit tests.</p> <p>If you are building tests for your React app you are very likely using <a href="http://airbnb.io/projects/enzyme/" target="_blank" rel="noopener">enzyme</a> to create instances of your component.  
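<p>For reference, a typical enzyme-based test creates the component along these lines (a minimal sketch; the component name and prop are illustrative):</p> <div class="code-snippet"> <pre class="code-content">import { mount } from 'enzyme'; // mount() performs a full render of the component tree, // which is why React lifecycle methods end up running const wrapper = mount(&lt;YourComponent someProp="value" /&gt;); </pre> </div>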
The issue is that when enzyme creates the component it invokes the lifecycle methods, like <code>componentDidMount</code>.  Sometimes we do not want this to be called, but how do we suppress it?</p> <p>I have found 2 different ways to suppress/mock <code>componentDidMount</code>.</p> <p>Method one is to redefine <code>componentDidMount</code> on your component for your tests.  This could have interesting side effects so use with caution.</p> <div class="code-snippet"> <pre class="code-content"> describe('UsefullNameHere', () =&gt; { beforeAll(() =&gt; { YourComponent.prototype.componentDidMount = () =&gt; { // can omit or add custom logic }; }); }); </pre> </div> <p>Basically above I am just redefining the componentDidMount method on my component.  This works and allows you to have custom logic.  Be aware that when doing the above you will have changed the implementation for your component for the lifetime of your test session.</p> <p>Another solution is to use a mocking framework like <a href="http://sinonjs.org/" target="_blank" rel="noopener">SinonJs</a>.  With Sinon you can stub out the <code>componentDidMount</code> implementation as seen below</p> <div class="code-snippet"> <pre class="code-content"> describe('UsefullNameHere', () =&gt; { let componentDidMountStub = null; beforeAll(() =&gt; { componentDidMountStub = sinon.stub(YourComponent.prototype, 'componentDidMount').callsFake(function() { // can omit or add custom logic }); }); afterAll(() =&gt; { componentDidMountStub.restore(); }); }); </pre> </div> <p>Above I am using .stub to redefine the method.  I also added .<a href="http://sinonjs.org/releases/v4.3.0/stubs/" target="_blank" rel="noopener">callsFake</a>() but this can be omitted if you just want to ignore the call.  You will want to make sure you restore your stub via the afterAll; otherwise, you will have stubbed out the call for the lifetime of your test session.</p> <p>Till next time,</p> Using Manual Mocks to test the AWS SDK with Jest https://derikwhittaker.blog/2018/02/20/using-manual-mocks-to-test-the-aws-sdk-with-jest/ Maintainer of Code, pusher of bits… urn:uuid:3a424860-3707-7327-2bb1-a60b9f3be47d Tue, 20 Feb 2018 13:56:45 +0000 Anytime you build Node applications it is highly suggested that you cover your code with tests.  When your code interacts with 3rd-party APIs such as AWS you will most certainly want to mock/stub your calls in order to prevent external calls (if you actually want to do external calls, these are called integration tests &#8230; <p><a href="https://derikwhittaker.blog/2018/02/20/using-manual-mocks-to-test-the-aws-sdk-with-jest/" class="more-link">Continue reading <span class="screen-reader-text">Using Manual Mocks to test the AWS SDK with&#160;Jest</span></a></p> <p>Anytime you build Node applications it is highly suggested that you cover your code with tests.  When your code interacts with 3rd-party APIs such as AWS you will most certainly want to mock/stub your calls in order to prevent external calls (if you actually want to do external calls, these are called integration tests, not unit tests).</p> <p>If you are using <a href="http://bit.ly/jest-get-started" target="_blank" rel="noopener">Jest</a>, one solution is to utilize the built-in support for <a href="http://bit.ly/jest-manual-mocks" target="_blank" rel="noopener">manual mocks.</a>  I have found the usage of manual mocks invaluable while testing 3rd-party APIs such as the AWS SDK.  
Keep in mind that because I am using manual mocks, this removes the need for libraries like <a href="http://bit.ly/sinon-js" target="_blank" rel="noopener">SinonJs</a> (a JavaScript framework for creating stubs/mocks/spies).</p> <p>The way that manual mocks work in Jest is as follows (from the Jest website&#8217;s documentation).</p> <blockquote><p><em>Manual mocks are defined by writing a module in a <code>__mocks__/</code> subdirectory immediately adjacent to the module. For example, to mock a module called <code>user</code> in the <code>models</code> directory, create a file called <code>user.js</code> and put it in the <code>models/__mocks__</code> directory. Note that the <code>__mocks__</code> folder is case-sensitive, so naming the directory <code>__MOCKS__</code> will break on some systems. If the module you are mocking is a node module (eg: <code>fs</code>), the mock should be placed in the <code>__mocks__</code> directory adjacent to <code>node_modules</code> (unless you configured <a href="https://facebook.github.io/jest/docs/en/configuration.html#roots-array-string"><code>roots</code></a> to point to a folder other than the project root).</em></p></blockquote> <p>In my case I want to mock out the usage of the <a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">AWS-SDK</a> for <a href="http://bit.ly/aws-sdk-node" target="_blank" rel="noopener">Node</a>.</p> <p>To do this I created a __mocks__ folder at the root of my solution.  I then created an <a href="http://bit.ly/gist-aws-sdk-js" target="_blank" rel="noopener">aws-sdk.js</a> file inside this folder.</p> <p>Now that I have my mocks folder created with an aws-sdk.js file, I am able to consume my manual mock in my Jest test by simply referencing the aws-sdk via a <code>require('aws-sdk')</code> command.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk'); </pre> </div> <p>With the declaration of AWS above, my code is able to use the <a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">NPM </a>package during normal usage, or my aws-sdk.js mock when running under the Jest context.</p> <p>Below is a small sample of the code I have inside my aws-sdk.js file for my manual mock.</p> <div class="code-snippet"> <pre class="code-content">const stubs = require('./aws-stubs'); const AWS = {}; // This here is to allow/prevent runtime errors if you are using // AWS.config to do some runtime configuration of the library. // If you do not need any runtime configuration you can omit this. AWS.config = { setPromisesDependency: (arg) =&gt; {} }; AWS.S3 = function() { } // Because I care about using the S3 services which are part of the SDK // I need to set up the correct identifier. // AWS.S3.prototype = { ...AWS.S3.prototype, // Stub for the listObjectsV2 method in the sdk listObjectsV2(params){ const stubPromise = new Promise((resolve, reject) =&gt; { // pulling in stub data from an external file to remove the noise // from this file. See the top line for how to pull this in resolve(stubs.listObjects); }); return { promise: () =&gt; { return stubPromise; } } } }; // Export my AWS object so it can be referenced via requires module.exports = AWS; </pre> </div> <p>A few things to point out in the code above.</p> <ol> <li>I chose to use the <a href="http://bit.ly/sdk-javascript-promises" target="_blank" rel="noopener">promise</a>s implementation of listObjectsV2.  Because of this I need to return a promise method as my result on my listObjectsV2 function.  
I am sure there are other ways to accomplish this, but this worked and is pretty easy.</li> <li>My function is returning stub data, but this data is described in a separate file called aws-stubs.js which sits alongside my aws-sdk.js file.  I went this route to remove the noise of having the stub data inside my aws-sdk file.  You can see a full example of this <a href="http://bit.ly/gist-aws-stub-data" target="_blank" rel="noopener">here</a>.</li> </ol> <p>Now that I have everything set up, my tests will no longer attempt to hit the actual aws-sdk, but when running in non-test mode they will.</p> <p>Till next time,</p> Configure Visual Studio Code to debug Jest Tests https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/ Maintainer of Code, pusher of bits… urn:uuid:31928626-b984-35f6-bf96-5bfb71e16208 Fri, 16 Feb 2018 21:33:03 +0000 If you have not given Visual Studio Code a spin you really should, especially if you are doing web/javascript/Node development. One super awesome feature of VS Code is the ability to easily configure debugging for your Jest (should work just fine with other JavaScript testing frameworks) tests.  I have found that most of &#8230; <p><a href="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/" class="more-link">Continue reading <span class="screen-reader-text">Configure Visual Studio Code to debug Jest&#160;Tests</span></a></p> <p>If you have not given <a href="https://code.visualstudio.com/" target="_blank" rel="noopener">Visual Studio Code</a> a spin you really should, especially if you are doing web/javascript/Node development.</p> <p>One super awesome feature of VS Code is the ability to easily configure debugging for your <a href="https://facebook.github.io/jest/" target="_blank" rel="noopener">Jest </a>(should work just fine with other JavaScript testing frameworks) tests.  I have found that most of the time I do not need to actually step into the debugger when writing tests, but there are times when using <code>console.log</code> is just too much friction and I want to step into the debugger.</p> <p>So how do we configure VS Code?</p> <p>First you will need to install the <a href="https://www.npmjs.com/package/jest-cli" target="_blank" rel="noopener">Jest-Cli</a> NPM package (I am assuming you already have Jest set up to run your tests; if you do not, please read the <a href="https://facebook.github.io/jest/docs/en/getting-started.html" target="_blank" rel="noopener">Getting-Started</a> docs).  
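<p>If you do not have the package yet, the install is a single command (shown here as a project-local dev dependency, since the launch configuration below points at node_modules):</p> <div class="code-snippet"> <pre class="code-content">npm install --save-dev jest-cli </pre> </div>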
If you fail to do this step you will get the following error in Code when you try to run the debugger.</p> <p><img class="alignnone size-full wp-image-78" src="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640" alt="JestCLI" /></p> <p>After you have Jest-Cli installed you will need to configure VS Code for debugging.  To do this, open up the configuration by clicking Debug -&gt; Open Configurations.  This will open up a file called launch.json.</p> <p>Once launch.json is open, add the following configuration</p> <div class="code-snippet"> <pre class="code-content"> { "name": "Jest Tests", "type": "node", "request": "launch", "program": "${workspaceRoot}/node_modules/jest-cli/bin/jest.js", "stopOnEntry": false, "args": ["--runInBand"], "cwd": "${workspaceRoot}", "preLaunchTask": null, "runtimeExecutable": null, "runtimeArgs": [ "--nolazy" ], "env": { "NODE_ENV": "development" }, "console": "internalConsole", "sourceMaps": false, "outFiles": [] } </pre> </div> <p>Here is a gist of a working <a href="https://gist.github.com/derikwhittaker/331d4a5befddf7fc6b2599f1ada5d866" target="_blank" rel="noopener">launch.json</a> file.</p> <p>After you save the file you are almost ready to start your debugging.</p> <p>Before you can debug you will want to open the debug menu (the bug icon on the left toolbar).  This will show a drop-down menu with different configurations.  
Make sure &#8216;Jest Tests&#8217; is selected.</p> <p><img class="alignnone size-full wp-image-79" src="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640" alt="JestTest" /></p> <p>If you have this set up correctly you should be able to set breakpoints and hit F5.</p> <p>Till next time,</p> Going Async with Node AWS SDK with Express https://derikwhittaker.blog/2018/02/13/going-async-with-node-aws-sdk-with-express/ Maintainer of Code, pusher of bits… urn:uuid:d4750cda-8c6e-8b2f-577b-78c746ee6ebd Tue, 13 Feb 2018 13:00:30 +0000 When building applications in Node/Express you will quickly come to realize that everything is done asynchronously. But how you accomplish these async tasks can vary.  The 'old school' way was to use callbacks, which often led to callback hell.  Then came along Promises, which we thought were going to solve all the world's problems; it turned out they helped, but did not solve everything.  Finally in Node 8.0 (ok, you could use them in Node 7.6) support for async/await was introduced and this really has cleaned up and enhanced the readability of your code. <p>When building applications in <a href="https://nodejs.org/en/" target="_blank" rel="noopener">Node</a>/<a href="http://expressjs.com/" target="_blank" rel="noopener">Express </a>you will quickly come to realize that everything is done asynchronously. But how you accomplish these async tasks can vary.  The &#8216;old school&#8217; way was to use callbacks, which often led to <a href="http://callbackhell.com/" target="_blank" rel="noopener">callback hell</a>.  Then came along <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise">Promises</a>, which we thought were going to solve all the world&#8217;s problems; it turned out they helped, but did not solve everything.  Finally in Node 8.0 (ok, you could use them in Node 7.6) support for <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function" target="_blank" rel="noopener">async</a>/<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await" target="_blank" rel="noopener">await</a> was introduced and this really has cleaned up and enhanced the readability of your code.</p> <p>Having the ability to use async/await is great, and is supported out of the box w/ Express.  
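<p>As a baseline, a bare-bones async route handler looks something like the following (a minimal sketch; <code>fetchUser</code> stands in for any promise-returning call and is not a real API):</p> <div class="code-snippet"> <pre class="code-content">const express = require('express'); const router = express.Router(); // Marking the handler async lets us await any promise-returning call router.get('/users/:id', async (req, res) =&gt; { const user = await fetchUser(req.params.id); // illustrative helper res.json(user); }); module.exports = router; </pre> </div>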
But what do you do when you are using a library which still wants to use promises or callbacks? The case in point for this article is the <a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank" rel="noopener">AWS Node SDK</a>.</p> <p>By default, if you read through the AWS SDK documentation the examples lead you to believe that you need to use callbacks when implementing the SDK.  Well, this can really lead to some nasty code in the world of Node/Express.  However, as of <a href="https://aws.amazon.com/blogs/developer/support-for-promises-in-the-sdk/" target="_blank" rel="noopener">v2.3.0</a> of the AWS SDK there is support for Promises.  This is much cleaner than using callbacks, but still poses a bit of an issue if you want to use async/await in your Express routes.</p> <p>However, with a bit of work you can get your promise-based AWS calls to play nicely with your async/await-based Express routes.  Let&#8217;s take a look at how we can accomplish this.</p> <p>Before you get started I am going to make a few assumptions.</p> <ol> <li>You already have a Node/Express application set up</li> <li>You already have the AWS SDK for Node installed; if not, read <a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank" rel="noopener">here</a></li> </ol> <p>The first thing we are going to need to do is add a reference to our AWS SDK and configure it to use promises.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk'); AWS.config.setPromisesDependency(null); </pre> </div> <p>After we have our SDK configured we can implement our route handler.  In my example here I am placing all the logic inside my handler.  In a real code base I would suggest better deconstruction of this code into smaller parts.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk'); const express = require('express'); const router = express.Router(); const s3 = new AWS.S3(); router.get('/myRoute', async (req, res) =&gt; { const params = { Bucket: "bucket_name_here" }; let results = {}; var listPromise = s3.listObjects(params).promise(); listPromise.then((data) =&gt; { results = data; }); await Promise.all([listPromise]); res.json({data: results }); }); module.exports = router; </pre> </div> <p>Let&#8217;s review the code above and call out a few important items.</p> <p>The first thing to notice is the addition of the <code>async</code> keyword in my route handler.  This is what allows us to use async/await in Node/Express.</p> <p>The next thing to look at is how I am calling s3.listObjects.  Notice I am <strong>NOT </strong>providing a callback to the method, but instead I am chaining with .promise().  This is what instructs the SDK to use promises vs callbacks.  Once I have my promise I chain a &#8216;then&#8217; in order to handle my response.</p> <p>The last thing to pay attention to is the line with <code>await Promise.all([listPromise]);</code>  This is the magic that forces our route handler to not return prior to the resolution of all of our Promises.  
Without this your call would exit prior to the listObjects call completing.</p> <p>Finally, we are simply returning our data from the listObjects call via a <code>res.json</code> call.</p> <p>That&#8217;s it, pretty straightforward, once you learn that the AWS SDK supports something other than callbacks.</p> <p>Till next time,</p> Unable To Access Mysql With Root and No Password After New Install On Ubuntu https://blog.jasonmeridth.com/posts/unable-to-access-mysql-with-root-and-no-password-after-new-install-on-ubuntu/ Jason Meridth urn:uuid:f81a51eb-8405-7add-bddb-f805b183347e Wed, 31 Jan 2018 00:13:00 +0000 <p>This bit me in the rear end again today. Had to reinstall mysql-server-5.7 for other reasons.</p> <p>You just installed <code class="highlighter-rouge">mysql-server</code> locally for your development environment on a recent version of Ubuntu (I have 17.10 artful installed). You did it with a blank password for the <code class="highlighter-rouge">root</code> user. You type <code class="highlighter-rouge">mysql -u root</code> and you see <code class="highlighter-rouge">Access denied for user 'root'@'localhost'</code>.</p> <p><img src="https://blog.jasonmeridth.com/images/wat.png" alt="wat" /></p> <p>Issue: Because you chose to not have a password for the <code class="highlighter-rouge">root</code> user, the <code class="highlighter-rouge">auth_plugin</code> for MySQL defaulted to <code class="highlighter-rouge">auth_socket</code>. That means if you type <code class="highlighter-rouge">sudo mysql -u root</code> you will get in. If you don’t, then this is NOT the fix for you.</p> <p>Solution: Change the <code class="highlighter-rouge">auth_plugin</code> to <code class="highlighter-rouge">mysql_native_password</code> so that you can use the root user in the database.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo mysql -u root mysql&gt; USE mysql; mysql&gt; UPDATE user SET plugin='mysql_native_password' WHERE User='root'; mysql&gt; FLUSH PRIVILEGES; mysql&gt; exit; $ sudo systemctl restart mysql $ sudo systemctl status mysql </code></pre></div></div> <p><strong>NB</strong> ALWAYS set a password for mysql-server in staging/production.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/unable-to-access-mysql-with-root-and-no-password-after-new-install-on-ubuntu/">Unable To Access Mysql With Root and No Password After New Install On Ubuntu</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on January 30, 2018.</p> New Job https://blog.jasonmeridth.com/posts/new-job/ Jason Meridth urn:uuid:102e69a7-2b63-e750-2fa5-f46372d4d7c1 Mon, 08 Jan 2018 18:13:00 +0000 <p>Well, it is a new year and I’ve started a new job. I am now a Senior Software Engineer at <a href="https://truelinkfinancial.com">True Link Financial</a>.</p> <p><img src="https://blog.jasonmeridth.com/images/tllogo.png" alt="true link financial logo" /></p> <p>After interviewing with the co-founders Kai and Claire and their team, I knew I wanted to work here.</p> <p><strong>TL;DR</strong>: True Link: We give the elderly and disabled (really, anyone) back their financial freedom where they may not usually have it.</p> <p>Longer Version: Imagine you have an elderly family member who may start showing signs of dementia. You can give them a True Link card and administer their card. You link it to their bank account or another source of funding and you can set limitations on when, where and how the card can be used. 
The family member feels freedom by not having to continually ask for money but is also protected from scammers and non-friendly people (yep, they exist).</p> <p>The customer service team, the marketing team, the product team, the engineering team and everyone else at True Link are amazing.</p> <p>For any nerd readers, the tech stack is currently Rails, React, AWS, Ansible. We’ll be introducing Docker and Kubernetes soon hopefully, but always ensuring the right tools for the right job.</p> <p>Looking forward to 2018.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/new-job/">New Job</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on January 08, 2018.</p> Hello, React! - A Beginner’s Setup Tutorial http://aspiringcraftsman.com/2017/05/25/hello-react-a-beginners-setup-tutorial/ Aspiring Craftsman urn:uuid:58e62c02-c8ee-bb68-825c-b59007af7c7f Thu, 25 May 2017 14:23:59 +0000 React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over configuration for getting started. Tutorials which side-step configuration by using jsfiddle or code generator options are great when you’re wanting to just focus on the framework features itself, but many tutorials leave beginners struggling to piece things together when you’re ready to create a simple React application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process. <p>React <noindex></noindex> has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over configuration for getting started. Tutorials which side-step configuration by using jsfiddle or code generator options are great when you’re wanting to just focus on the framework features itself, but many tutorials leave beginners struggling to piece things together when you’re ready to create a simple React application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process.</p> <h2 id="a-simple-tutorial">A Simple Tutorial</h2> <p>This tutorial is merely intended to help walk you through the steps to getting a simple React example up and running. When you’re ready to dive into actually learning the React framework, a great list of tutorials can be found <a href="http://andrewhfarmer.com/getting-started-tutorials/">here.</a></p> <p>There are several build, transpiler, or bundling tools from which to select when working with React. For this tutorial, we’ll be using Node, NPM, Webpack, and Babel.</p> <h2 id="step-1-install-node">Step 1: Install Node</h2> <p>Download and install Node for your target platform. Node distributions can be obtained <a href="https://nodejs.org/en/">here</a>.</p> <h2 id="step-2-create-a-project-folder">Step 2: Create a Project Folder</h2> <p>From a command line prompt, create a folder where you plan to develop your example.</p> <pre class="prettyprint">$&gt; mkdir hello-react </pre> <h2 id="step-3-initialize-project">Step 3: Initialize Project</h2> <p>Change directory into the example folder and use the Node Package Manager (npm) to initialize the project:</p> <pre class="prettyprint">$&gt; cd hello-react $&gt; npm init --yes </pre> <p>This results in the creation of a package.json file. 
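<p>The generated file is minimal and looks roughly like the following (the exact contents vary a bit by npm version, so treat this as illustrative):</p> <pre class="prettyprint">{ "name": "hello-react", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" &amp;&amp; exit 1" }, "keywords": [], "author": "", "license": "ISC" } </pre>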
While not technically necessary for this example, creating this file will allow us to persist our packaging and runtime dependencies.</p> <h2 id="step-4-install-react">Step 4: Install React</h2> <p>React is broken up into a core framework package and a package related to rendering to the Document Object Model (DOM).</p> <p>From the hello-react folder, run the following command to install these packages and add them to your package.json file:</p> <pre class="prettyprint">$&gt; npm install --save-dev react react-dom </pre> <h2 id="step-5-install-babel">Step 5: Install Babel</h2> <p>Babel is a transpiler, which is to say it’s a tool for converting one language or language version to another. In our case, we’ll be converting EcmaScript 2015 to EcmaScript 5.</p> <p>From the hello-react folder, run the following command to install babel:</p> <pre class="prettyprint">$&gt; npm install --save-dev babel-core </pre> <h2 id="step-6-install-webpack">Step 6: Install Webpack</h2> <p>Webpack is a module bundler. We’ll be using it to package all of our scripts into a single script we’ll include in our example Web page.</p> <p>From the hello-react folder, run the following command to install webpack globally:</p> <pre class="prettyprint">$&gt; npm install webpack --global </pre> <h2 id="step-7-install-babel-loader">Step 7: Install Babel Loader</h2> <p>Babel loader is a Webpack plugin for using Babel to transpile scripts during the bundling process.</p> <p>From the hello-react folder, run the following command to install babel loader:</p> <pre class="prettyprint">$&gt; npm install --save-dev babel-loader </pre> <h2 id="step-8-install-babel-presets">Step 8: Install Babel Presets</h2> <p>Babel presets are collections of plugins needed to support a given feature. For example, the latest version of babel-preset-es2015 at the time of this writing will install 24 plugins which enable Babel to transpile ECMAScript 2015 to ECMAScript 5. We’ll be using presets for ES2015 as well as presets for React. 
The React presets are primarily needed for processing of <a href="https://facebook.github.io/react/docs/introducing-jsx.html">JSX</a>.</p> <p>From the hello-react folder, run the following command to install the babel presets for both ES2015 and React:</p> <pre class="prettyprint">$&gt; npm install --save-dev babel-preset-es2015 babel-preset-react </pre> <h2 id="step-9-configure-babel">Step 9: Configure Babel</h2> <p>In order to tell Babel which presets we want to use when transpiling our scripts, we need to provide a babel config file.</p> <p>Within the hello-react folder, create a file named .babelrc with the following contents:</p> <pre class="prettyprint">{ "presets" : ["es2015", "react"] } </pre> <h2 id="step-10-configure-webpack">Step 10: Configure Webpack</h2> <p>In order to tell Webpack we want to use Babel, where our entry point module is, and where we want the output bundle to be created, we need to create a Webpack config file.</p> <p>Within the hello-react folder, create a file named webpack.config.js with the following contents:</p> <figure class="highlight"><pre><code class="language-javascript" data-lang="javascript"><span class="kd">const</span> <span class="nx">path</span> <span class="o">=</span> <span class="nx">require</span><span class="p">(</span><span class="s1">'path'</span><span class="p">);</span> <span class="nx">module</span><span class="p">.</span><span class="nx">exports</span> <span class="o">=</span> <span class="p">{</span> <span class="na">entry</span><span class="p">:</span> <span class="s1">'./app/index.js'</span><span class="p">,</span> <span class="na">output</span><span class="p">:</span> <span class="p">{</span> <span class="na">path</span><span class="p">:</span> <span class="nx">path</span><span class="p">.</span><span class="nx">resolve</span><span class="p">(</span><span class="s1">'dist'</span><span class="p">),</span> <span class="na">filename</span><span class="p">:</span> <span class="s1">'index_bundle.js'</span> <span class="p">},</span> <span class="na">module</span><span class="p">:</span> <span class="p">{</span> <span class="na">rules</span><span class="p">:</span> <span class="p">[</span> <span class="p">{</span> <span class="na">test</span><span class="p">:</span> <span class="sr">/</span><span class="se">\.</span><span class="sr">js$/</span><span class="p">,</span> <span class="na">loader</span><span class="p">:</span> <span class="s1">'babel-loader'</span><span class="p">,</span> <span class="na">exclude</span><span class="p">:</span> <span class="sr">/node_modules/</span> <span class="p">}</span> <span class="p">]</span> <span class="p">}</span> <span class="p">}</span></code></pre></figure> <h2 id="step-11-create-a-react-component">Step 11: Create a React Component</h2> <p>For our example, we’ll just be creating a simple component which renders the text “Hello, React!”.</p> <p>First, create an app sub-folder:</p> <pre class="prettyprint">$&gt; mkdir app </pre> <p>Next, create a file named app/index.js with the following content:</p> <figure class="highlight"><pre><code class="language-html" data-lang="html">import React from 'react'; import ReactDOM from 'react-dom'; class HelloWorld extends React.Component { render() { return ( <span class="nt">&lt;div&gt;</span> Hello, React! 
<span class="nt">&lt;/div&gt;</span> ) } }; ReactDOM.render(<span class="nt">&lt;HelloWorld</span> <span class="nt">/&gt;</span>, document.getElementById('root'));</code></pre></figure> <p>Briefly, this code includes the react and react-dom modules, defines a HelloWorld class which returns an element containing the text “Hello, React!” expressed using <a href="https://facebook.github.io/react/docs/introducing-jsx.html">JSX syntax</a>, and finally renders an instance of the HelloWorld element (also using JSX syntax) to the DOM.</p> <p>If you’re completely new to React, don’t worry too much about trying to fully understand the code. Once you’ve completed this tutorial and have an example up and running, you can move on to one of the aforementioned tutorials, or work through <a href="https://facebook.github.io/react/docs/hello-world.html">React’s Hello World example</a> to learn more about the syntax used in this example.</p> <div class="note"> <p> Note: In many examples, you will see the following syntax: </p> <figure class="highlight"><pre><code class="language-html" data-lang="html">var HelloWorld = React.createClass( { render() { return ( <span class="nt">&lt;div&gt;</span> Hello, React! <span class="nt">&lt;/div&gt;</span> ) } });</code></pre></figure> <p> This syntax is how classes were defined in older versions of React and will therefore be what you see in older tutorials. As of React version 15.5.0 use of this syntax will produce the following warning: </p> <p style="color: red"> Warning: HelloWorld: React.createClass is deprecated and will be removed in version 16. Use plain JavaScript classes instead. If you&#8217;re not yet ready to migrate, create-react-class is available on npm as a drop-in replacement. </p> </div> <h2 id="step-12-create-a-webpage">Step 12: Create a Webpage</h2> <p>Next, we’ll create a simple html file which includes the bundled output defined in step 10 and declare a &lt;div&gt; element with the id “root” which is used by our react source in step 11 to render our HelloWorld component.</p> <p>Within the hello-react folder, create a file named index.html with the following contents:</p> <pre class="prettyprint">&lt;html&gt; &lt;div id="root"&gt;&lt;/div&gt; &lt;script src="./dist/index_bundle.js"&gt;&lt;/script&gt; &lt;/html&gt; </pre> <h2 id="step-13-bundle-the-application">Step 13: Bundle the Application</h2> <p>To convert our app/index.js source to ECMAScript 5 and bundle it with the react and react-dom modules we’ve included, we simply need to execute webpack.</p> <p>Within the hello-react folder, run the following command to create the dist/index_bundle.js file reference by our index.html file:</p> <pre class="prettyprint">$&gt; webpack </pre> <h2 id="step-14-run-the-example">Step 14: Run the Example</h2> <p>Using a browser, open up the index.html file. If you’ve followed all the steps correctly, you should see the following text displayed:</p> <pre class="prettyprint">Hello, React! </pre> <h2 id="conclusion">Conclusion</h2> <p>Congratulations! After completing this tutorial, you should have a pretty good idea about the steps involved in getting a basic React app up and going. 
Hopefully this will save some absolute beginners from spending too much time trying to piece these steps together.</p><img src="http://feeds.feedburner.com/~r/AspiringCraftsman/~4/yDmiJCyBJXs" height="1" width="1" alt=""/> Exploring TypeScript http://aspiringcraftsman.com/2016/08/30/exploring-typescript/ Aspiring Craftsman urn:uuid:883387f0-b828-6686-4ed9-77c6e9d3a87c Tue, 30 Aug 2016 18:01:06 +0000 A proposal to use TypeScript was recently made within my development team, so I’ve taken a bit of time to investigate the platform.  This article reflects my thoughts and conclusions on where the platform is at this point. <p dir="ltr"> A <noindex></noindex> proposal to use TypeScript was recently made within my development team, so I’ve taken a bit of time to investigate the platform.  This article reflects my thoughts and conclusions on where the platform is at this point. </p> <p> </p> <h2 dir="ltr"> TypeScript: What is It? </h2> <p dir="ltr"> TypeScript is a scripting language created by Microsoft which provides static typing and a class-based object-oriented programming paradigm for transpiling to JavaScript.  In contrast to other compile-to-JavaScript languages such as CoffeeScript and Dart, TypeScript is a superset of JavaScript, which means that TypeScript introduces syntax enhancements to the JavaScript language. </p> <p dir="ltr"> <img src="https://lh4.googleusercontent.com/5dOim07aCnQUsvhT46DKVtw9T-gNq3djeIrZpGC_PABTOD1yEL8k-FzoND8lpEEmgGHU7LboXOnKA7YWZwLqB4ruWrw36-kKN1UznQ1O-XOa67fo1k5K_xAFozSN3KdfLWbtJY6I" alt="" width="470" height="468" /> </p> <p> </p> <h2 dir="ltr"> Recent Rise In Popularity </h2> <p dir="ltr"> TypeScript made its debut in late 2012 and was first released in April 2014.  Community interest has been fairly marginal since its debut, but has shown an increase since an announcement that the next version of Google’s popular Angular framework would be written in TypeScript. </p> <p dir="ltr"> The following Google Trends chart shows the interest parallel between Angular 2 and TypeScript from 2014 to present: </p> <p dir="ltr"> <img src="https://lh3.googleusercontent.com/lXJD30Ta9Zl1TL2HYqasJL_os6IzFdHurk9amcVSbUVnAQOg5hy4lyn0QfdwRCTcQTehdIoBnw7r5tpm_7N5Ai1flIVxiT7jLsxtY19loQWqW9AAQ5WtmbtAfFhQamZpYkjLy8sB" alt="" width="624" height="193" /> </p> <p> </p> <h2 dir="ltr"> The Good </h2> <h3 dir="ltr"> Type System </h3> <p dir="ltr"> TypeScript provides an optional type system which can aid in catching certain types of programming errors at compile time.  The information derived from the type system also serves as the foundation for most of the tooling surrounding TypeScript. </p> <p dir="ltr"> The following is a simple example showing a basic usage of the type system: </p> <pre class="prettyprint">interface Person { firstName: string; lastName: string; } class Greeter { greeting: string; constructor(message: string) { this.greeting = message; } greet(person: Person) { return this.greeting + " " + person.firstName + " " + person.lastName; } } let greeter = new Greeter("Hello,"); let person = { firstName: "John", lastName: "Doe" }; document.body.innerHTML = greeter.greet(person); </pre> <p dir="ltr"> In this example, a Person interface is declared with two string properties: firstName and lastName.  Next, a Greeter class is created with a greet() function which is declared to take a parameter of type Person.  Next, instances of Greeter and Person are instantiated and the Greeter instance’s greet() function is invoked passing in the Person instance.  
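<p dir="ltr"> To make the compile-time checking concrete, here is a variation the compiler would reject (a hedged sketch; the exact diagnostic wording varies by TypeScript version): </p> <pre class="prettyprint">let badPerson = { firstName: "Jane" }; // Compile-time error: property 'lastName' is missing, so the argument // is not assignable to the greet() function's 'Person' parameter greeter.greet(badPerson); </pre>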
At compile time, TypeScript is able to detect whether the object passed to the greet() function conforms to the Person interface and whether the values assigned to the expected properties are of the expected type. </p> <h3 dir="ltr"> Tooling </h3> <p dir="ltr"> While the type system and programming paradigm introduced by TypeScript are its key features, it’s really the tooling facilitated by the type system that makes the platform shine.  Being notified of syntax errors at compile time is helpful, but it’s really the productivity that stems from features such as design-time type checking, intellisense/code-completion, and refactoring that make TypeScript compelling. </p> <p dir="ltr"> TypeScript is currently supported by many popular IDEs including Visual Studio, WebStorm, Sublime Text, Brackets, and Eclipse. </p> <h3 dir="ltr"> EcmaScript Foundation </h3> <p dir="ltr"> One of the differentiators of TypeScript from other languages which transpile to JavaScript (CoffeeScript, Dart, etc.) is that TypeScript builds upon the JavaScript language.  This means that all valid JavaScript code is valid TypeScript code. </p> <h3 dir="ltr"> Idiomatic JavaScript Generation </h3> <p dir="ltr"> One of the goals of the TypeScript team was to ensure the TypeScript compiler emitted idiomatic JavaScript.  This means the code produced by the TypeScript compiler is readable and generally follows normal JavaScript conventions. </p> <p> </p> <h2 dir="ltr"> The Not So Good </h2> <h3 dir="ltr"> Type Definitions and 3rd-Party Libraries </h3> <p dir="ltr"> TypeScript requires type definitions to be created for 3rd-party code to realize many of the benefits of the tooling.  While the <a href="https://github.com/DefinitelyTyped/DefinitelyTyped">DefinitelyTyped </a>project provides type definitions for the most popular JavaScript libraries used today, there will probably be the occasion where the library you want to use has no type definition file. </p> <p dir="ltr"> Moreover, interfaces maintained by 3rd-party sources are somewhat antithetical to their primary purpose.  Interfaces should serve as contracts for the behavior of a library.  If the interfaces are maintained by a 3rd party, however, they can’t be accurately described as “contracts” since no implicit promise is being made by the library author that the interface being provided accurately matches the library’s behavior.  It’s probably the case that this doesn’t prove to be much of an issue in practice, but at minimum I would think relying upon type definitions created by 3rd parties would eventually lead to the available type definitions lagging behind new releases of the libraries being used. </p> <h3 dir="ltr"> Type System Overhead </h3> <p dir="ltr"> Introducing a type system is a bit of a double-edged sword.  While a type system can provide a lot of benefits, it also adds syntactical overhead to a codebase.  In some cases this can result in the code you maintain actually being harder to read and understand than the code being generated.  This can be illustrated using Anders Hejlsberg’s example presented at Build 2014. 
</p> <p dir="ltr"> The TypeScript source in the first listing shows a generic sortBy method which takes a callback for retrieving the value by which to sort while the second listing shows the generated JavaScript source: </p> <pre class="prettyprint">interface Entity { name: string; } function sortBy(a: T[], keyOf: (item: T) =&gt; any): T[] { var result = a.slice(0); result.sort(function(x, y) { var kx = keyOf(x); var ky = keyOf(y); return kx &gt; ky ? 1: kx &lt; ky ? -1 : 0; }); return result; } var products = [ { name: "Lawnmower", price: 395.00, id: 345801 }, { name: "Hammer", price: 5.75, id: 266701 }, { name: "Toaster", price: 19.95, id: 400670 }, { name: "Padlock", price: 4.50, id: 560004 } ]; var sorted = sortBy(products, x =&gt; x.price); document.body.innerText = JSON.stringify(sorted, null, 4); </pre> <pre class="prettyprint"> function sortBy(a, keyOf) { var result = a.slice(0); result.sort(function (x, y) { var kx = keyOf(x); var ky = keyOf(y); return kx &gt; ky ? 1 : kx &lt; ky ? -1 : 0; }); return result; } var products = [ { name: "Lawnmower", price: 395.00, id: 345801 }, { name: "Hammer", price: 5.75, id: 266701 }, { name: "Toaster", price: 19.95, id: 400670 }, { name: "Padlock", price: 4.50, id: 560004 } ]; var sorted = sortBy(products, function (x) { return x.price; }); document.body.innerText = JSON.stringify(sorted, null, 4); </pre> <p>Comparing the two signatures, which is easier to understand?</p> <h3 id="typescript">TypeScript</h3> <p><code class="highlighter-rouge"> function sortBy&lt;T&gt;(a: T[], keyOf: (item: T) =&gt; any): T[]</code></p> <h3 id="javascript">JavaScript</h3> <p><code class="highlighter-rouge"> function sortBy(a, keyOf)</code></p> <p>It might be reasoned that the TypeScript version should be easier to understand given that it provides more information, but many would disagree that this is in fact the case.  The reason for this is that the TypeScript version adds quite a bit of syntax to explicitly describe information that can otherwise be deduced fairly easily.  In many ways this is similar to how we process natural language.  When we communicate, we don’t encode each word with its grammatical function (e.g. “I [subject] bought [past tense verb] you [indirect object] a [indefinite article] gift [direct object].”)  Rather, we rapidly and subconsciously make guesses based on familiarity with the vocabulary, context, convention and other such signals.</p> <p dir="ltr">  In the case of the sortBy example, we can guess at the parameters and return type for the function faster than we can parse the type syntax.  This becomes even easier if descriptive names are used (e.g. sortByKey(array, keySelector)).  Sometimes implicit expression is simply easier to understand. </p> <p dir="ltr"> Now to be fair, there are cases where TypeScript is arguably going to be more clear than the generated JavaScript (and for similar reasons).  Consider the following listing: </p> <pre class="prettyprint">class Auto{ constructor(public wheels = 4, public doors?){ } } var car = new Auto(); car.doors = 2; </pre> <pre class="prettyprint">var Auto = (function () { function Auto(wheels, doors) { if (wheels === void 0) { wheels = 4; } this.wheels = wheels; this.doors = doors; } return Auto; }()); var car = new Auto(); car.doors = 2; </pre> <p dir="ltr"> In this example, the TypeScript version results in less syntax noise than the generated JavaScript version.   
Of course, this is a comparison between TypeScript and its generated syntax rather than the following syntax many may have used: </p> <p><code class="highlighter-rouge">wheels = wheels || 4;</code></p> <h3 dir="ltr"> Community Alignment </h3> <p dir="ltr"> While TypeScript is a superset of JavaScript, this deserves some qualification.  Unlike languages such as CoffeeScript and Dart which also compile to JavaScript, TypeScript starts with the EcmaScript specification as the base of its language.  Nevertheless, TypeScript is still a separate language. </p> <p dir="ltr"> A team’s choice to maintain an application in TypeScript over JavaScript isn’t quite the same thing as choosing to implement an application in C# version 6 instead of C# version 5.  TypeScript isn’t the promise: “Programming with the ECMAScript of tomorrow &#8230; today!”.  Rather, it’s a language that layers a different programming paradigm on top of JavaScript.  While you can choose how much of the feature superset and programming paradigm you wish to use, the more features and approaches peculiar to TypeScript that are adopted the further the codebase will diverge from standard JavaScript syntax and conventions. </p> <p dir="ltr"> A codebase that fully leverages TypeScript can tend to look far more like C# than standard JavaScript.  In many ways, TypeScript is the perfect front-end development environment for C# developers as it provides a familiar syntax and programming paradigm to which they are already accustomed.  Unfortunately, developers who spend most of their time in C# often struggle with JavaScript syntax, conventions, and patterns.  The same might be expected to be true for TypeScript developers who utilize the language to emulate object-oriented development in C#. </p> <p dir="ltr"> Ultimately, the real negative I see with this is that (at least right now) TypeScript doesn’t represent how the majority of Web development is being done in the community.  This has implications on the availability of documentation, availability of online help, candidate pool size, marketability, and skill portability. </p> <p dir="ltr"> Consider the following chart which compares the current job openings available for JavaScript and TypeScript: </p> <p dir="ltr"> <img title="Points scored" src="https://lh3.googleusercontent.com/d4AA-5New_zh1zXkw4CJVkFmZR4jh8GkN-T0JRrdmaXuh4rysP0coWY7ukPLj3C_Yg-JEv72A96dwv2CrD7GZP2ZvzflFiOuvWdMlb4uVbIjRlYKM4jhxA4-1TDD6a7-90OSd1am" alt="" width="600" height="371" /> </p> <p dir="ltr"> Source: simplyhired.com &#8211; August 2016 </p> <p>Now, the fact that there may be far fewer TypeScript jobs out there than JavaScript jobs doesn’t mean that TypeScript isn’t going to be the next big thing.  What it does mean, however, is that you are going to experience less friction in the aforementioned areas if you stick with standard EcmaScript.</p> <h2 dir="ltr"> Alternatives </h2> <p dir="ltr"> For those considering TypeScript, the following are a couple of options you might consider before converting just yet. </p> <h3 dir="ltr"> ECMAScript 2015 </h3> <p dir="ltr"> If you’re interested in TypeScript and currently still writing ES5 code, one step you might consider is to begin using ES2015.  In John Papa’s article: “<a href="https://johnpapa.net/es5-es2015-typescript/">Understanding ES5, ES2015 and TypeScript</a>”, he writes: </p> <p>Why Not Just use ES2015?  That’s a great option! Learning ES2015 is a huge leap from ES5. 
Once you master ES2015, I argue that going from there to TypeScript is a very small step.</p> <p>In many ways, taking the time to learn ECMAScript 2015 is the best option even if you think you’re ready to start using TypeScript.  Making the journey from ES5 to ES2015 and then later on to TypeScript will help you to clearly understand which new features are standard ECMAScript and which are TypeScript … knowledge you’re likely to be fuzzy on if you move straight from ES5 to TypeScript.</p> <h3 dir="ltr"> Flow </h3> <p dir="ltr"> If you’ve already become convinced that you need a type system for JavaScript development or you’re just looking to test the waters, you might consider a lighter-weight alternative to the TypeScript platform: Facebook’s <a href="https://flowtype.org/">Flow </a>project.  Flow is a static type checker for JavaScript designed to gain static type checking benefits without losing the “feel” of coding in JavaScript, and in some cases <a href="https://djcordhose.github.io/flow-vs-typescript/2016_hhjs.html#/">it does a better job</a> at catching type-related errors than TypeScript. </p> <p>For the most part, Flow’s type system is identical to that of TypeScript, so it shouldn’t be too hard to convert to TypeScript down the road if desired.  Several IDEs have Flow support including WebStorm, Sublime Text, Atom, and of course Facebook’s own Nuclide.</p> <p>As of August 2016, <a href="https://flowtype.org/blog/2016/08/01/Windows-Support.html">Flow also supports Windows</a>.  Unfortunately this support has only recently become available, so Flow doesn’t yet enjoy the same IDE support on Windows as it does on OSX and Linux platforms.  IDE support can likely be expected to improve going forward.</p> <h3 dir="ltr"> Test-Driven Development </h3> <p dir="ltr"> If you’ve found the primary appeal of TypeScript to be the immediate feedback you receive from the tooling, another methodology for achieving this (which has far greater benefits) is the practice of Test-Driven Development (TDD). The TDD methodology not only provides a rapid feedback cycle, but (if done properly) results in duplication-free code that is more maintainable, constrains the team to developing only the behavior needed by the application, and produces a regression-test suite which serves as both a safety net for future modifications and documentation for how the system is intended to be used. Of course, these same benefits can be realized with TypeScript development as well, but teams practicing TDD may find less need for TypeScript’s compiler-generated error checking. </p> <p> </p> <h2 dir="ltr"> Conclusion </h2> <p dir="ltr"> After taking some time to explore TypeScript, I’ve found that aspects of its ecosystem are very compelling, particularly the tooling that’s available for the platform.  Nevertheless, it still seems a bit early to know what role the platform will play in the future of Web development. </p> <p dir="ltr"> Personally, I like the JavaScript language and, while I see some advantages of introducing type checking, I think a wiser course for now would be to invest in learning EcmaScript 2015 and keep a watchful eye on TypeScript adoption going forward. 
</p><img src="http://feeds.feedburner.com/~r/AspiringCraftsman/~4/BQMmowjVXfA" height="1" width="1" alt=""/> Git on Windows: Whence Cometh Configuration http://aspiringcraftsman.com/2016/08/22/git-on-windows-whence-cometh-configuration/ Aspiring Craftsman urn:uuid:4c7e32aa-e18f-b14f-8c6a-bd956394a9d8 Mon, 22 Aug 2016 09:08:03 +0000 I recently went through the process of setting up a new development environment on Windows which included installing Git for Windows. At one point in the course of tweaking my environment, I found myself trying to determine which config file a particular setting originated. The command ‘git config –list’ showed the setting, but ‘git config –list –system’, ‘git config –list –global’, and ‘git config –list –local’ all failed to reflect the setting. Looking at the options for config, I discovered you can add a ‘–show-origin’ which led to a discovery: Git for Windows has an additional location from which it derives your configuration. <p>I recently went through the process of setting up a new development environment on Windows which included installing <a href="https://git-scm.com/">Git <noindex></noindex> for Windows</a>. At one point in the course of tweaking my environment, I found myself trying to determine which config file a particular setting originated. The command ‘git config –list’ showed the setting, but ‘git config –list –system’, ‘git config –list –global’, and ‘git config –list –local’ all failed to reflect the setting. Looking at the options for config, I discovered you can add a ‘–show-origin’ which led to a discovery: Git for Windows has an additional location from which it derives your configuration.</p> <p>It turns out, since the last time I installed git on Windows, <a href="https://github.com/git-for-windows/git/commit/153328ba92ca6cf921d2272fa7e355603cbf71b7">a change was made</a> for the purposes of sharing git configuration across different git projects (namely, libgit2 and Git for Windows) where a Windows-specific location is now used as the lowest setting precedence (i.e. the default settings). This is the file: C:\ProgramData\Git\config. It doesn’t appear git added a way to list or edit this file as a well-known location (e.g. ‘git config –list windows’), so it’s not particularly discoverable aside from knowing about the ‘–show-origin’ switch.</p> <p>So the order in which Git for Windows sources configuration information is as follows:</p> <ol> <li>C:\ProgramData\Git\config</li> <li>system config (e.g. C:\Program Files\Git\mingw64\etc\gitconfig)</li> <li>global config (%HOMEPATH%.gitconfig</li> <li>local config (repository-specific .git/config)</li> </ol> <p>Perhaps this article might help the next soul who finds themselves trying to figure out from where some seemingly magical git setting is originating.</p><img src="http://feeds.feedburner.com/~r/AspiringCraftsman/~4/gdQ6d1EZSC8" height="1" width="1" alt=""/> Separation of Concerns: Application Builds &amp; Continuous Integration http://aspiringcraftsman.com/2016/02/28/separation-of-concerns-application-builds-continuous-integration/ Aspiring Craftsman urn:uuid:8e3bb423-0a0e-3c7e-5fb8-81a78316f9da Sun, 28 Feb 2016 17:32:48 +0000 I’ve always had an interest in application build processes. From the start of my career, I’ve generally been in the position of establishing the solution architecture for the projects I’ve participated in and this has usually involved establishing a baseline build process. <p>I’ve always had an interest in application build processes. 
From the start of my career, I’ve generally been in the position of establishing the solution architecture for the projects I’ve participated in, and this has usually involved establishing a baseline build process.</p> <p>My career began as a Unix C developer while still in college, where many of my responsibilities involved writing tools in both C and various Unix shell scripting languages which were deployed to other workstations throughout the country. From there, I moved on to Unix C CGI Web development and worked a number of years with Makefiles. With the advent of Java, I began using tools like Ant and Maven for several more years before switching to the .Net platform, where I used open source build tools like NAnt until Microsoft introduced MSBuild with its 2.0 release. Upon moving to the Austin, TX area, I was greatly influenced by what was the early seat of the Alt.Net movement. It was there that I abandoned what in hindsight has always been a ridiculous idea … trying to script a build using XML. For the next 4-5 years, I used Rake to define all of my builds. Starting last year, I began using Gulp and associated tooling on the Node platform for authoring .Net builds.</p> <p>Throughout this journey of working with various build technologies, I’ve formed a few opinions along the way. One of these opinions is that the build process shouldn’t be coupled to the Continuous Integration process.</p> <p>A project should have a build process which exists and can be executed independent of the particular continuous integration tool one chooses. This allows builds to be created and maintained on the developer’s local machine. The particular build steps involved in building a given application are inherently part of its ontology. What compilers and preprocessors need to be used, how dependencies are obtained and published, when and how configuration values are supplied for different environments, how and where automated test suites are run, how the application distribution is created … all of these are concerns whose definition and orchestration are particular to a given project. Such concerns should be encapsulated in a build script which lives with the rest of the application source, not as discrete build steps defined within your CI tool.</p> <p>Ideally, builds should never break, but when they do it’s important to resolve the issue as quickly as possible. Not being able to run a build locally means potentially having to repeatedly introduce changes until the build is fixed. This tends to pollute the source code commit history with comments like: “<em>Fixing the build</em>”, “<em>Fixing the build for realz this time</em>”, and “<em>Please let this be it … I’m ready to go home</em>”. Of course, there are times when a build can break because of environmental issues that may not be mirrored locally (e.g. lack of disk space, network-related issues, 3rd-party software dependencies, etc.), but encapsulating as much of your build as possible goes a long way toward keeping builds running smoothly. Anyone on your team should be able to clone/check out the project, issue a single command from the command line (e.g. gulp, rake, psake, etc.) and watch the full build process execute, including any pre-processing steps, compilation, distribution packaging and even deployment to a target environment.</p> <p>Aside from being able to run a build locally, decoupling the build from the CI process allows the technologies used by each to vary independently.
Switching from one CI tool to another should ideally just require installing the software, pointing it to your source control, defining the single step that issues the build, and defining the triggers that initiate the process.</p> <p>The creation of a project distribution and the scheduling mechanism for how often this happens are separate concerns. Just because a CI tool allows you to script out your build steps doesn’t mean you should.</p> Survey of Entity Framework Unit of Work Patterns http://aspiringcraftsman.com/2015/11/01/survey-of-entity-framework-unit-of-work-patterns/ Aspiring Craftsman urn:uuid:c009a2b3-5be8-87ea-f4b3-b30ce9247fca Sun, 01 Nov 2015 21:11:13 +0000 Earlier this year I joined a development team which chose Entity Framework for the persistence needs of a new greenfield project. While I’ve worked on a few projects which used Entity Framework here and there over the years, the bulk of my experience has been with NHibernate and, more recently, Dapper.Net. As a result, there hasn’t been all that much occasion for me to explore it in any level of depth until this year. <p>Earlier this year I joined a development team which chose Entity Framework for the persistence needs of a new greenfield project. While I’ve worked on a few projects which used Entity Framework here and there over the years, the bulk of my experience has been with NHibernate and, more recently, Dapper.Net. As a result, there hasn’t been all that much occasion for me to explore it in any level of depth until this year.</p> <p>One area I recently took some time to research is how the Unit of Work pattern is best implemented within the context of using Entity Framework. While the topic is still relatively fresh on my mind, I thought I’d use this as an opportunity to create a catalog of the various approaches I’ve encountered and include some thoughts about each.</p> <h2 id="unit-of-work">Unit of Work</h2> <p>To start, it may be helpful to give a basic definition of the Unit of Work pattern. A Unit of Work can be defined as a collection of operations that succeed or fail as a single unit. Given a series of operations which need to be executed in response to some interaction with an application, it’s often necessary to ensure that none of the operations cause side-effects if any one of them fails. This is accomplished by having participating operations respond to either a commit or rollback message indicating whether the operation performed should be completed or reverted.</p>
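<p>Expressed as code, the contract the pattern implies is quite small. Here’s a minimal sketch of my own (not from any particular library) of what participating operations agree to:</p> <pre class="prettyprint">// A minimal statement of the Unit of Work contract: the operations
// performed within the unit either commit together or roll back together.
public interface IUnitOfWork : IDisposable
{
    void Commit();
    void Rollback();
}
</pre>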
<p>A Unit of Work can consist of different types of operations such as Web Service calls, database operations, or even in-memory operations; however, the focus of this article will be on approaches to facilitating the Unit of Work pattern with Entity Framework.</p> <p>With that out of the way, let’s take a look at each approach.</p> <h2 id="implicit-transactions">Implicit Transactions</h2> <p>The first approach to achieving a Unit of Work around a series of Entity Framework operations is to simply create an instance of a DbContext class, make changes to one or more DbSet&lt;T&gt; instances, and then call SaveChanges() on the context. Entity Framework automatically creates an implicit transaction for changesets which include INSERTs, UPDATEs, and DELETEs.</p> <p>Here’s an example:</p> <pre class="prettyprint">public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;
    using (var context = new MyStoreContext())
    {
        customer = new Customer
        {
            FirstName = request.FirstName,
            LastName = request.LastName
        };
        context.Customers.Add(customer);
        context.SaveChanges();
        return customer;
    }
}
</pre> <p>The benefit of this approach is that a transaction is created only when necessary and is kept alive only for the duration of the SaveChanges() call. Some drawbacks to this approach, however, are that it leads to opaque dependencies and adds a bit of repetitive infrastructure code to each of your application services.</p> <p>If you prefer to work directly with Entity Framework, then this approach may be fine for simple needs.</p> <h2 id="transactionscope">TransactionScope</h2> <p>Another approach is to use the System.Transactions.TransactionScope class provided by the .Net framework. When any Entity Framework operation is used which causes a connection to be opened (e.g. SaveChanges()), the connection will enlist in the ambient transaction defined by the TransactionScope class, and the transaction is committed once the TransactionScope is successfully completed. Here’s an example of this approach:</p> <pre class="prettyprint">public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;
    using (var transaction = new TransactionScope())
    {
        using (var context = new MyStoreContext())
        {
            customer = new Customer
            {
                FirstName = request.FirstName,
                LastName = request.LastName
            };
            context.Customers.Add(customer);
            context.SaveChanges();
            transaction.Complete();
        }
        return customer;
    }
}
</pre> <p>In general, I find TransactionScope to be a good general-purpose solution for defining a Unit of Work around Entity Framework operations, as it works with ADO.Net, all versions of Entity Framework, and other ORMs, which makes it possible to combine multiple libraries within the same Unit of Work if needed. Additionally, it provides a foundation for building a more comprehensive Unit of Work pattern which would allow other types of operations to enlist in the Unit of Work.</p> <p>Caution should be exercised when using TransactionScope, however, as certain operations can implicitly escalate the transaction to a distributed transaction, causing undesired overhead. For those choosing solutions involving TransactionScope, I would recommend educating yourself on how and when transactions are escalated.</p>
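<p>To make the escalation concern concrete, here’s a sketch of my own (not from the article; it assumes MyStoreContext also exposes an Orders set and that Order has a parameterless constructor). Each context below opens its own connection inside the same TransactionScope, and depending on your database server and provider version, enlisting the second connection can escalate the ambient transaction to a distributed (MSDTC) transaction:</p> <pre class="prettyprint">public void CreateCustomerWithOrder(CreateCustomerRequest request)
{
    using (var transaction = new TransactionScope())
    {
        using (var customerContext = new MyStoreContext())
        {
            customerContext.Customers.Add(new Customer
            {
                FirstName = request.FirstName,
                LastName = request.LastName
            });
            customerContext.SaveChanges(); // first connection enlists in the ambient transaction
        }

        using (var orderContext = new MyStoreContext())
        {
            orderContext.Orders.Add(new Order());
            orderContext.SaveChanges(); // second connection enlists; escalation can occur here
        }

        transaction.Complete();
    }
}
</pre>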
<p>While I find the TransactionScope class to be a good general-purpose solution, using it directly does couple your services to a specific strategy and adds a bit of noise to your code. It’s a viable choice, but I would recommend inverting the concerns of managing the Unit of Work boundary, as shown in approaches we’ll look at later.</p> <h2 id="adonet-transactions">ADO.Net Transactions</h2> <p>This approach involves creating an instance of DbTransaction and instructing the participating DbContext instance to use the existing transaction:</p> <pre class="prettyprint">public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;
    var connectionString = ConfigurationManager.ConnectionStrings["MyStoreContext"].ConnectionString;
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            using (var context = new MyStoreContext(connection))
            {
                context.Database.UseTransaction(transaction);
                try
                {
                    customer = new Customer
                    {
                        FirstName = request.FirstName,
                        LastName = request.LastName
                    };
                    context.Customers.Add(customer);
                    context.SaveChanges();
                }
                catch (Exception)
                {
                    transaction.Rollback();
                    throw;
                }
            }
            transaction.Commit();
            return customer;
        }
    }
}
</pre> <p>As can be seen from the example, this approach adds quite a bit of infrastructure noise to your code. While not something I’d recommend standardizing upon, this approach provides another avenue for sharing transactions between Entity Framework and straight ADO.Net code, which might prove useful in certain situations. In general, I wouldn’t recommend such an approach.</p> <h2 id="entity-framework-transactions">Entity Framework Transactions</h2> <p>The relative newcomer to the mix is the transaction API introduced with Entity Framework 6. Here’s a basic example of its use:</p> <pre class="prettyprint">public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;
    using (var context = new MyStoreContext())
    {
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                customer = new Customer
                {
                    FirstName = request.FirstName,
                    LastName = request.LastName
                };
                context.Customers.Add(customer);
                context.SaveChanges();
                transaction.Commit();
            }
            catch (Exception)
            {
                transaction.Rollback();
                throw;
            }
        }
    }
    return customer;
}
</pre> <p>This is the approach recommended by Microsoft for achieving transactions with Entity Framework going forward. If you’re deploying applications with Entity Framework 6 and beyond, this will be your safest choice for Unit of Work implementations which only require database operations to participate. Similar to a couple of the previous approaches we’ve considered, the drawbacks of using this directly are that it creates opaque dependencies and adds repetitive infrastructure code to all of your application services. This is also a viable option, but I would recommend coupling it with other approaches we’ll look at later to improve the readability and maintainability of your application services.</p>
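<p>As an illustration of what reducing that repetition might look like (my own sketch, not anything prescribed by Microsoft), the transaction boilerplate can be folded into a reusable helper, with the real work passed in as a delegate:</p> <pre class="prettyprint">public static class TransactionalOperation
{
    // Runs the supplied work inside a single DbContext and Entity Framework 6
    // transaction, committing on success and rolling back on any exception.
    public static TResult Execute&lt;TResult&gt;(Func&lt;MyStoreContext, TResult&gt; work)
    {
        using (var context = new MyStoreContext())
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                var result = work(context);
                context.SaveChanges();
                transaction.Commit();
                return result;
            }
            catch (Exception)
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}
</pre> <p>The earlier CreateCustomer example then collapses to a single Execute call that adds the customer and returns it, though calling such a helper directly still leaves the Unit of Work dependency opaque.</p>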
<h2 id="unit-of-work-repository-manager">Unit of Work Repository Manager</h2> <p>The first approach I encountered when researching how others were facilitating the Unit of Work pattern with Entity Framework was a strategy set forth by Microsoft’s guidance on the topic <a href="http://www.asp.net/mvc/overview/older-versions/getting-started-with-ef-5-using-mvc-4/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application">here</a>. This strategy involves creating a UnitOfWork class which encapsulates an instance of the DbContext and exposes each repository as a property. Clients of repositories take a dependency upon an instance of UnitOfWork and access each repository as needed through properties on the UnitOfWork instance. The UnitOfWork type exposes a SaveChanges() method to be used when all the changes made through the repositories are to be persisted to the database. Here is an example of this approach:</p> <pre class="prettyprint">public interface IUnitOfWork
{
    ICustomerRepository CustomerRepository { get; }
    IOrderRepository OrderRepository { get; }
    void Save();
}

public class UnitOfWork : IDisposable, IUnitOfWork
{
    readonly MyContext _context = new MyContext();
    ICustomerRepository _customerRepository;
    IOrderRepository _orderRepository;

    public ICustomerRepository CustomerRepository
    {
        get { return _customerRepository ?? (_customerRepository = new CustomerRepository(_context)); }
    }

    public IOrderRepository OrderRepository
    {
        get { return _orderRepository ?? (_orderRepository = new OrderRepository(_context)); }
    }

    public void Dispose()
    {
        if (_context != null)
        {
            _context.Dispose();
        }
    }

    public void Save()
    {
        _context.SaveChanges();
    }
}

public class CustomerService : ICustomerService
{
    readonly IUnitOfWork _unitOfWork;

    public CustomerService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        var customer = new Customer
        {
            FirstName = request.FirstName,
            LastName = request.LastName
        };
        _unitOfWork.CustomerRepository.Add(customer);
        _unitOfWork.Save();
    }
}
</pre> <p>It isn’t hard to imagine how this approach was conceived, given it closely mirrors the typical shape of the DbContext you find in Entity Framework guidance, where public instances of DbSet&lt;T&gt; are exposed for each aggregate root. Given this pattern is presented on the ASP.Net website and comes up as one of the first results when doing a search for “Entity Framework” and “Unit of Work”, I imagine this approach has gained some popularity among .Net developers. There are, however, a number of issues I have with this approach.</p> <p>First, this approach leads to opaque dependencies. Because classes interact with repositories through the UnitOfWork instance, the client interface doesn’t clearly express the inherent business-level collaborators it depends upon (i.e. any aggregate root collections).</p> <p>Second, this violates the Open/Closed Principle. Adding new aggregate roots to the system requires modifying the UnitOfWork each time.</p>
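<p>To see that violation concretely, imagine introducing a hypothetical Product aggregate (IProductRepository is mine, purely for illustration); both the interface and the class behind it must be reopened and edited:</p> <pre class="prettyprint">// Hypothetical illustration: each new aggregate root forces a change to
// the existing IUnitOfWork abstraction (and to the UnitOfWork class),
// rather than the system being extensible without modification.
public interface IUnitOfWork
{
    ICustomerRepository CustomerRepository { get; }
    IOrderRepository OrderRepository { get; }
    IProductRepository ProductRepository { get; } // added for the new aggregate
    void Save();
}
</pre>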
<p>Third, this violates the Single Responsibility Principle. The single responsibility of a Unit of Work implementation should be to encapsulate the behavior necessary to commit or roll back a set of operations atomically. The instantiation and management of repositories, or of any other component which may wish to enlist in a unit of work, is a separate concern.</p> <p>Lastly, this results in a nominal abstraction which is semantically coupled with Entity Framework. The example code for this approach sets forth an interface to the UnitOfWork implementation, which isn’t the approach used in the aforementioned Microsoft article. Whether you take a dependency upon the interface or the implementation directly, however, the presumption of such an abstraction is to decouple the application from using Entity Framework directly. While such an abstraction might provide some benefits, it reflects Entity Framework usage semantics and as such doesn’t really decouple you from the particular persistence technology you’re using. While you could use this approach with another ORM (e.g. NHibernate), it is more a reflection of Entity Framework operations (e.g. its flushing model) and usage patterns. As such, you probably wouldn’t arrive at this same abstraction had you started by defining the abstraction in terms of the behavior required by your application prior to choosing a specific ORM (i.e. following the Dependency Inversion Principle). You might even find yourself violating the Liskov Substitution Principle if you actually attempted to provide an alternate ORM implementation. Given these issues, I would advise people to avoid this approach.</p> <h2 id="injected-unit-of-work-and-repositories">Injected Unit of Work and Repositories</h2> <p>For those inclined to make all dependencies transparent while maintaining an abstraction from Entity Framework, the next strategy may seem the natural next step. This strategy involves creating an abstraction around the call to DbContext.SaveChanges() and requires sharing a single instance of DbContext among all the components whose operations need to participate within the underlying SaveChanges() call as a single transaction.</p> <p>Here is an example:</p> <pre class="prettyprint">public class CustomerService : ICustomerService
{
    readonly IUnitOfWork _unitOfWork;
    readonly ICustomerRepository _customerRepository;

    public CustomerService(IUnitOfWork unitOfWork, ICustomerRepository customerRepository)
    {
        _unitOfWork = unitOfWork;
        _customerRepository = customerRepository;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        var customer = new Customer
        {
            FirstName = request.FirstName,
            LastName = request.LastName
        };
        _customerRepository.Add(customer);
        _unitOfWork.Save();
    }
}
</pre> <p>While this approach improves upon the opaque design of the Repository Manager, there are several issues I find with it as well.</p> <p>Similar to the first example, this UnitOfWork implementation is still semantically coupled to how Entity Framework urges you to think about things. Entity Framework wants you to call SaveChanges() whenever you’re ready to flush any INSERT, UPDATE, or DELETE operations you’ve issued against the database, and this abstraction basically surfaces that behavior. If you were to use an alternate framework that supported a different flushing model (e.g. NHibernate), you likely wouldn’t end up with the same abstraction.</p> <p>Moreover, this approach has no definitive Unit of Work boundary. With this approach, you aren’t defining a logical Unit of Work, but are merely injecting a UnitOfWork you can participate within. When you invoke the underlying DbContext.SaveChanges() method, it isn’t explicit what work will be committed.</p> <p>While this approach corrects a few design issues I find with the Repository Manager, overall I like it even less. At least with the Repository Manager approach you have a defined Unit of Work boundary, which is kind of the whole point. My recommendation would be to avoid this approach as well.</p>
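<p>For reference, the IUnitOfWork injected in the example above typically amounts to little more than a thin wrapper over the shared context. A minimal sketch (again mine, not the article’s) might look like this:</p> <pre class="prettyprint">public interface IUnitOfWork
{
    void Save();
}

public class EntityFrameworkUnitOfWork : IUnitOfWork
{
    // The same MyStoreContext instance must be registered with a shared
    // (e.g. per-request) lifetime so that the repositories and this wrapper
    // all observe the same change tracker.
    readonly MyStoreContext _context;

    public EntityFrameworkUnitOfWork(MyStoreContext context)
    {
        _context = context;
    }

    public void Save()
    {
        _context.SaveChanges();
    }
}
</pre>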
<h2 id="repository-savechanges-method">Repository SaveChanges Method</h2> <p>The next strategy is basically a variation on the previous one. Rather than injecting a separate type whose sole purpose is to provide an indirect way to call the SaveChanges() method, some merely expose this through the Repository:</p> <pre class="prettyprint">public class CustomerService : ICustomerService
{
    readonly ICustomerRepository _customerRepository;

    public CustomerService(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        var customer = new Customer
        {
            FirstName = request.FirstName,
            LastName = request.LastName
        };
        _customerRepository.Add(customer);
        _customerRepository.SaveChanges();
    }
}
</pre> <p>This approach shares many of the same issues as the previous one. While it reduces a bit of infrastructure noise, it’s still semantically coupled to Entity Framework’s approach and still lacks a defined Unit of Work boundary. Additionally, it lacks clarity as to what happens when you call the SaveChanges() method. Given the Repository pattern is intended to present a virtual collection of all the entities of a given type within your system, one might suppose a method named “SaveChanges” means that you are somehow persisting any changes made to the particular entities represented by the repository (setting aside the fact that doing so is really a subversion of the pattern’s purpose). On the contrary, it really means “save all the changes made to any entities tracked by the underlying DbContext”. I would also recommend avoiding this approach.</p> <h2 id="unit-of-work-per-request">Unit of Work Per Request</h2> <p>A pattern I’m a bit embarrassed to admit has been characteristic of many projects I’ve worked on in the past (though not with EF) is to create a Unit of Work implementation which is scoped to a Web application’s request lifetime. Using this approach, whatever method is used to facilitate a Unit of Work is configured with a DI container using a per-HttpRequest lifetime scope; the Unit of Work boundary is opened the first time a component injected with the UnitOfWork is resolved, and committed or rolled back when the HttpRequest is disposed by the container.</p> <p>There are a few different manifestations of this approach depending upon the particular framework and strategy you’re using, but here’s a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container:</p> <pre class="prettyprint">builder.RegisterType&lt;MyDbContext&gt;()
    .As&lt;DbContext&gt;()
    .InstancePerRequest()
    .OnActivating(x =&gt;
    {
        // start a transaction
    })
    .OnRelease(context =&gt;
    {
        try
        {
            // commit or rollback the transaction
        }
        catch (Exception e)
        {
            // log the exception
            throw;
        }
    });

public class SomeService : ISomeService
{
    public void DoSomething()
    {
        // do some work
    }
}
</pre> <p>While this approach eliminates the need for your services to be concerned with the Unit of Work infrastructure, the biggest issue with it is what happens when an error occurs. When the application can’t successfully commit a transaction for whatever reason, the rollback occurs AFTER you’ve typically relinquished control of the request (e.g. you’ve already returned results from a controller). When this occurs, you may end up telling your customer that something happened when it actually didn’t, and your client state may end up out of sync with the actual persisted state of the application.</p> <p>While I used this strategy without incident for some time with NHibernate, I eventually ran into a problem and concluded that the concern of transaction boundary management inherently belongs to the application-level entry point for a particular interaction with the system. This is another approach I’d recommend avoiding.</p> <h2 id="instantiated-unit-of-work">Instantiated Unit of Work</h2> <p>The next strategy involves instantiating a UnitOfWork, implemented using either the .Net framework TransactionScope class or the transaction API introduced by Entity Framework 6, to define a transaction boundary within the application service. Here’s an example:</p> <pre class="prettyprint">public class CustomerService : ICustomerService
{
    readonly ICustomerRepository _customerRepository;

    public CustomerService(ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        using (var unitOfWork = new UnitOfWork())
        {
            try
            {
                var customer = new Customer
                {
                    FirstName = request.FirstName,
                    LastName = request.LastName
                };
                _customerRepository.Add(customer);
                unitOfWork.Commit();
            }
            catch (Exception)
            {
                unitOfWork.Rollback();
                throw;
            }
        }
    }
}
</pre> <p>Functionally, this is a viable approach to facilitating a Unit of Work boundary with Entity Framework. A few drawbacks, however, are that the dependency upon the Unit of Work implementation is opaque and that it’s coupled to a specific implementation. While this isn’t a terrible approach, I would recommend other approaches discussed here which either surface any dependencies being taken on the Unit of Work infrastructure or invert the concerns of transaction management completely.</p>
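<p>The UnitOfWork type instantiated above isn’t shown in the article. As a bare-bones sketch of my own, built on the TransactionScope option discussed earlier (an Entity Framework 6 variant would wrap Database.BeginTransaction() in much the same way), it might look something like this:</p> <pre class="prettyprint">public class UnitOfWork : IDisposable
{
    readonly TransactionScope _scope = new TransactionScope();

    public void Commit()
    {
        // Votes to complete the ambient transaction; the actual commit
        // happens when the scope is disposed.
        _scope.Complete();
    }

    public void Rollback()
    {
        // No explicit action required: a TransactionScope disposed without
        // Complete() having been called rolls the transaction back.
    }

    public void Dispose()
    {
        _scope.Dispose();
    }
}
</pre>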
<h2 id="injected-unit-of-work-factory">Injected Unit of Work Factory</h2> <p>This strategy is similar to the one presented in the Instantiated Unit of Work example, but makes its dependence upon the Unit of Work infrastructure transparent and provides a point of abstraction which allows an alternate implementation to be provided by the factory:</p> <pre class="prettyprint">public class CustomerService : ICustomerService
{
    readonly ICustomerRepository _customerRepository;
    readonly IUnitOfWorkFactory _unitOfWorkFactory;

    public CustomerService(IUnitOfWorkFactory unitOfWorkFactory, ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
        _unitOfWorkFactory = unitOfWorkFactory;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        using (var unitOfWork = _unitOfWorkFactory.Create())