Los Techies http://feed.informer.com/digests/ZWDBOR7GBI/feeder Los Techies Respective post owners and feed distributors Thu, 08 Feb 2018 14:40:57 +0000 Feed Informer http://feed.informer.com/ Composite UIs for Microservices: Vertical Slice APIs https://jimmybogard.com/composite-uis-for-microservices-vertical-slice-apis/ Jimmy Bogard urn:uuid:7eb595ef-816a-3a60-cf74-b51019617a20 Wed, 15 May 2019 21:12:21 +0000 <p>This is a recent follow-up pattern to my series on Composite UIs in Microservices, which explores various strategies for composing at the edges. Other posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/composite-uis-for-microservices-a-primer/">A primer</a></li> <li><a href="https://jimmybogard.com/composite-uis-for-microservices-composition-options/">Composition options</a></li> <li><a href="https://jimmybogard.com/composite-uis-for-microservices-client-composition">Client composition</a></li> <li><a href="https://jimmybogard.com/composite-uis-for-microservices-server-composition">Server composition</a></li> <li><a href="https://jimmybogard.com/composite-uis-for-microservices-data-composition/">Data composition</a></li> <li><a href="https://jimmybogard.com/composite-uis-for-microservices-vertical-slice-apis/">Vertical Slice APIs</a></li> </ul> <p>When looking at a client-side composition, the next logical question is "how do my client-side components communicate with services?". Typically, there are two main approaches:</p> <ul> <li><a href="https://microservices.io/patterns/apigateway.html">API Gateway</a></li> <li><a href="https://microservices.io/patterns/apigateway.html#variation-backends-for-frontends">Backend-for-frontend</a></li> </ul> <p>In many of the applications and systems I deal with, I don't necessarily have a single page that composes everything together. Instead of something like this:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/7/2017/Picture0003.png" alt="Single UI with sections"></p> <p>Where I have a single page that composes many widgets (think Amazon), I instead have large numbers of pages that are <em>wholly owned</em> by specific services. 
There's then some sort of menu to choose between them:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/4/2019/Picture0064.png" alt="Tabbed pages"></p> <p>In this application, I have a single shell that composes multiple service UIs, but each service UI stretches all the way from the back end to the front end.</p> <p>These service UIs still need to talk to the back end, so my natural inclination was to look at "backend for frontend". But this picture looks like:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/4/2019/Picture0065.png" alt=""></p> <p>It's still API Gateways, just slightly segregated down per application.</p> <p>API Gateways are great in situations where you have highly segregated applications and APIs with highly separate release pipelines. But in our typical backend scenarios, we control the UI and API deployments, so any kind of composition in the API layer is complete overkill.</p> <p>For these situations, we introduce vertical slice APIs.</p> <h3 id="verticalsliceapis">Vertical Slice APIs</h3> <p>When we have whole pages or sections of the application wholly owned by a specific service, it's advantageous to go ahead and couple a specific API to that page or screen. It's cheaper and simpler for the UI to "know" exactly what API and request/response to call. So we start with a group of users, using a single system, that composes service-specific UI components together:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/4/2019/Picture0067.png" alt=""></p> <p>And a second system with a different set of users that also composes service-specific UI components together:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/4/2019/Picture0068.png" alt=""></p> <p>Both systems consume both services, but how should we build the APIs for each? With an API gateway, we'd have a single pinch point that <em>both</em> systems need to go through for <em>both</em> services. With BFF, we'd have <em>two</em> API gateways that then mediate for <em>both</em> services.</p> <p>Vertical slice APIs are different. Instead of mediating through API gateways, we create purpose-built APIs for each system:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/4/2019/Picture0069.png" alt=""></p> <p>Each service UI communicates with its own system-specific service API. That service API is intended only for that service - and because our service boundaries are <em>logical</em> not <em>physical</em>, the service boundary extends all the way to the UI.</p> <p>With this approach, if I need to change the UI, I can change the API without worrying about any other potential consumers. If I need to change the API, I've only got one API consumer to worry about. It's vertical slice architecture now extended to the APIs we create for composite UIs.</p> <p>There are some downsides to this approach - mainly that it results in more APIs. But like vertical slice application architecture, this intentional decoupling means we're much faster in building out screens and APIs.</p> <p>Additionally, I really don't have to worry about things like Swagger, since my APIs are completely purpose-built for a single service. When we build features, the UI and API get designed and built together.</p>
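<p>As a rough sketch of what one of these purpose-built endpoints can look like (the type and route names here are illustrative, not from the original post) - the endpoint and its response model are owned by the single screen that calls them, so changing that screen only ever touches this one slice:</p> <pre><code class="language-c#">public class OrderHistoryController : ControllerBase
{
    private readonly OrdersDbContext _db; // hypothetical EF Core context for this service

    public OrderHistoryController(OrdersDbContext db) =&gt; _db = db;

    // The response model exists solely for this screen - no shared DTOs, no gateway translation
    public class OrderHistoryItem
    {
        public int OrderId { get; set; }
        public DateTime PlacedOn { get; set; }
        public decimal Total { get; set; }
    }

    [HttpGet("api/order-history/{customerId}")]
    public Task&lt;List&lt;OrderHistoryItem&gt;&gt; Get(int customerId) =&gt;
        _db.Orders
            .Where(o =&gt; o.CustomerId == customerId)
            .Select(o =&gt; new OrderHistoryItem
            {
                OrderId = o.Id,
                PlacedOn = o.PlacedOn,
                Total = o.Total
            })
            .ToListAsync();
}
</code></pre>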
<p>For desktop applications, we'll go so far as to share the API model request/response objects as assemblies, so that we don't have to "generate" any kind of client SDK or response types.</p> <p>When we see a natural coupling of front end and back end, we can build along these vertical slices to ensure that we can add and modify code without worrying about affecting any other consumer.</p> MediatR 7.0.0 Released https://jimmybogard.com/mediatr-7-0-0-released/ Jimmy Bogard urn:uuid:8888e5b5-035f-8bc4-1550-436783f1c495 Thu, 02 May 2019 13:26:28 +0000 <p>Release notes:</p> <ul> <li><a href="https://github.com/jbogard/MediatR/releases/tag/v7.0.0">MediatR 7.0.0</a></li> <li><a href="https://github.com/jbogard/MediatR.Extensions.Microsoft.DependencyInjection/releases/tag/v7.0.0">MediatR.Extensions.Microsoft.DependencyInjection 7.0.0</a></li> </ul> <p>It's a major release bump because of a breaking change in the API of the post-processor.</p> <p>Enjoy!</p> AutoMapper 8.1.0 Released https://jimmybogard.com/automapper-8-1-0-released/ Jimmy Bogard urn:uuid:f943d772-03e5-59d8-0f0f-ed873624ed61 Thu, 25 Apr 2019 21:13:56 +0000 <p>Today we released AutoMapper 8.1.0:</p> <ul> <li><a href="http://docs.automapper.org/en/stable/8.0-Upgrade-Guide.html">Upgrade Guide for 8.0</a></li> <li><a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v8.1.0">Release Notes</a></li> </ul> <p>AutoMapper 8.1 adds a major new feature - <a href="https://docs.automapper.org/en/latest/Attribute-mapping.html">attribute-based maps</a>. Attribute maps let you easily declare maps on destination types when you have straightforward scenarios. Instead of:</p> <pre><code class="language-c#">public class OrderProfile { public OrderProfile() { CreateMap&lt;Order, OrderIndexModel&gt;(); CreateMap&lt;Order, OrderEditModel&gt;(); CreateMap&lt;Order, OrderCreateModel&gt;(); } } </code></pre> <p>You can declare your type maps directly on the destination types themselves with <code>AutoMapAttribute</code>:</p> <pre><code class="language-c#">[AutoMap(typeof(Order))] public class OrderIndexModel { // members } [AutoMap(typeof(Order))] public class OrderEditModel { // members } [AutoMap(typeof(Order))] public class OrderCreateModel { // members } </code></pre> <p>When you perform <code>services.AddAutoMapper</code> or <code>cfg.AddMaps</code> for any profile scanning, all attribute maps will get pulled in as well.</p> <p>For most straightforward member configuration, we included attributes as well.</p>
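<p>For example, a minimal sketch of that member-level configuration (the member names here are hypothetical, and the <code>SourceMember</code>/<code>Ignore</code> attributes are the ones described in the attribute-mapping docs):</p> <pre><code class="language-c#">[AutoMap(typeof(Order))]
public class OrderIndexModel
{
    public int Id { get; set; }

    // Redirect a destination member whose name doesn't follow the flattening convention...
    [SourceMember("ReferenceNumber")]
    public string OrderNumber { get; set; }

    // ...or opt a member out of mapping entirely.
    [Ignore]
    public string DisplayLabel { get; set; }
}
</code></pre>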
<p>Enjoy!</p> Sharing Context in MediatR Pipelines https://jimmybogard.com/sharing-context-in-mediatr-pipelines/ Jimmy Bogard urn:uuid:e648591a-ae5e-c010-af4a-a8bf70330708 Tue, 23 Apr 2019 15:56:28 +0000 <p>MediatR, a small library that implements the <a href="https://en.wikipedia.org/wiki/Mediator_pattern">Mediator pattern</a>, helps simplify scenarios when you want a simple in-memory request/response and notification implementation. 
Once you adopt its pattern, you'll often find many other related patterns start to show up - decorators, chains of responsibility, pattern matching, and more.</p> <p>Everything starts with a very basic implementation - a handler for a request:</p> <pre><code class="language-c#">public interface IRequestHandler&lt;in TRequest, TResponse&gt; where TRequest : IRequest&lt;TResponse&gt; { Task&lt;TResponse&gt; Handle(TRequest request, CancellationToken cancellationToken); } </code></pre> <p>Then on top of calling into a handler, we might want to call things <a href="https://lostechies.com/jimmybogard/2016/10/13/mediatr-pipeline-examples/">around our handler</a> for cross-cutting concerns, which led to <a href="https://github.com/jbogard/MediatR/wiki/Behaviors">behaviors</a>:</p> <pre><code class="language-c#">public interface IPipelineBehavior&lt;in TRequest, TResponse&gt; { Task&lt;TResponse&gt; Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate&lt;TResponse&gt; next); } </code></pre> <p>For simple scenarios, where I want to execute something just before or after a handler, MediatR includes a built-in pipeline behavior with additional pre/post-processors:</p> <pre><code class="language-c#">public interface IRequestPreProcessor&lt;in TRequest&gt; { Task Process(TRequest request, CancellationToken cancellationToken); } public interface IRequestPostProcessor&lt;in TRequest, in TResponse&gt; { Task Process(TRequest request, TResponse response, CancellationToken cancellationToken); } </code></pre> <p>Basically, I'm recreating a lot of existing functional patterns in an OO language, using dependency injection.</p> <blockquote> <p>Side note - it's possible to do functional patterns directly in C#, higher order functions, monads, partial application, currying, and more, but it's really, really ugly and not idiomatic C#.</p> </blockquote> <p>Inevitably however, it becomes necessary to share information <em>across</em> behaviors/processors. What are our options here? In ASP.NET Core, we have filters. For example, an action filter:</p> <pre><code class="language-c#">public interface IAsyncActionFilter : IFilterMetadata { Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next); } </code></pre> <p>This looks similar to our behavior, except with the first parameter. Our behaviors take simply a request object, while these filters have some sort of <code>context</code> object. 
Filters must explicitly use this context object for any kind of resolution of objects.</p> <p>For each of our behaviors, we have either our request object, dependency injection, or service location available to us.</p> <h3 id="requesthijacking">Request hijacking</h3> <p>One option available to us is to simply hijack the request object through some sort of base class:</p> <pre><code class="language-c#">public abstract class ContextualRequest&lt;TResponse&gt; : IRequest&lt;TResponse&gt; { public IDictionary&lt;string, object&gt; Items { get; } = new Dictionary&lt;string, object&gt;(); } </code></pre> <p>We use the request object as the place to put any shared context items, with any behavior then putting/removing items on it:</p> <pre><code class="language-c#">public class AuthBehavior&lt;TRequest, TResponse&gt; : IPipelineBehavior&lt;TRequest, TResponse&gt; where TRequest : ContextualRequest&lt;TResponse&gt; { private readonly IHttpContextAccessor _httpContextAccessor; public AuthBehavior(IHttpContextAccessor httpContextAccessor) =&gt; _httpContextAccessor = httpContextAccessor; public Task&lt;TResponse&gt; Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate&lt;TResponse&gt; next) { request.Items["CurrentUser"] = _httpContextAccessor.HttpContext.User; return next(); } } </code></pre> <p>Here we're placing the current request user onto an items dictionary on the request object, so that subsequent behaviors/processors can then use that user object.</p> <p>It's a bit ugly, however, since we're forcing a base class onto our request objects, breaking the concept of "favor composition over inheritance". At the time of writing, the built-in DI container doesn't support this kind of open generics constraint, so you'd be forced to adopt a 3rd-party container (literally any of them, they all support this).</p> <h3 id="servicelocation">Service Location</h3> <p>In places where you don't really need DI, you can instead use service location to pluck out the current user and do something with it:</p> <pre><code class="language-c#">public class AuthBehavior&lt;TRequest, TResponse&gt; : IPipelineBehavior&lt;TRequest, TResponse&gt; { public Task&lt;TResponse&gt; Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate&lt;TResponse&gt; next) { var user = (IPrincipal) HttpContext.Current.Items["CurrentUser"]; if (!user.Identity.IsAuthenticated) return Task.FromResult&lt;TResponse&gt;(default); return next(); } } </code></pre> <p>I'd not recommend this unless your system doesn't have much DI going on, such as in ASP.NET Classic.</p> <p>Finally, we can use dependency injection to share information.</p> <h3 id="dependencyinjectingcontext">Dependency Injecting Context</h3> <p>Rather than hijacking our request (a crude form of partial application/currying), or using service location, we can instead take advantage of dependency injection to inject a context object into any behavior/processor that needs it.</p> <p>First, we'll need to define our dependency. A dictionary is good, but having a concrete type is better:</p> <pre><code class="language-c#">public class ItemsCache : Dictionary&lt;string, object&gt; { } </code></pre> <p>Then we can register our special context object in the <code>ConfigureServices</code> method:</p> <pre><code class="language-c#">public void ConfigureServices(IServiceCollection services) { services.AddScoped&lt;ItemsCache&gt;(); } </code></pre> <p>Now we can add items to our cache directly:</p> <pre><code class="language-c#">public class AuthBehavior&lt;TRequest, TResponse&gt; : IPipelineBehavior&lt;TRequest, TResponse&gt; { private readonly IHttpContextAccessor _httpContextAccessor; private readonly ItemsCache _itemsCache; public AuthBehavior(IHttpContextAccessor httpContextAccessor, ItemsCache itemsCache) { _httpContextAccessor = httpContextAccessor; _itemsCache = itemsCache; } public Task&lt;TResponse&gt; Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate&lt;TResponse&gt; next) { _itemsCache["CurrentUser"] = _httpContextAccessor.HttpContext.User; return next(); } } </code></pre> <p>Any behavior that wants to <em>use</em> our context object only needs to depend on it (composition over inheritance). We don't have to resort to funny generics business or base classes to do so. Nor do we need to modify our pipeline to have custom request objects (built-in or otherwise).</p>
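<p>As an illustrative sketch (not from the original post), a later pipeline component just takes the same scoped <code>ItemsCache</code> as a constructor dependency and reads what <code>AuthBehavior</code> put there:</p> <pre><code class="language-c#">public class AuditingPostProcessor&lt;TRequest, TResponse&gt; : IRequestPostProcessor&lt;TRequest, TResponse&gt;
{
    private readonly ItemsCache _itemsCache;

    public AuditingPostProcessor(ItemsCache itemsCache) =&gt; _itemsCache = itemsCache;

    public Task Process(TRequest request, TResponse response, CancellationToken cancellationToken)
    {
        // Read the value an earlier behavior stored for this request's scope
        var user = (ClaimsPrincipal)_itemsCache["CurrentUser"];

        Console.WriteLine($"{typeof(TRequest).Name} handled for {user.Identity.Name}");

        return Task.CompletedTask;
    }
}
</code></pre>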
<p>If you find yourself needing to share information across components/services/behaviors/filters in a request, custom scoped dependencies are a great means of doing so.</p> The Curious Case of the JSON BOM https://jimmybogard.com/the-curious-case-of-the-json-bom/ Jimmy Bogard urn:uuid:c2d9e612-f86a-c2fd-4e1d-1df63ee9736d Fri, 05 Apr 2019 12:43:36 +0000 <p>Recently, I was testing some interop with Azure Service Bus, which has a rather useful feature when used with Azure Functions in that you can <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus#trigger---usage">directly bind JSON to a custom type</a>, to do something like:</p> <pre><code class="language-c#">[FunctionName("SaySomething")] public static void Run([ServiceBusTrigger("Endpoints.SaySomething", Connection = "SbConnection")]SaySomething command, ILogger 
log) { log.LogInformation($"Incoming message: {command.Message}"); } </code></pre> <p>As long as we have valid JSON, everything should "just work". However, when I sent a JSON message to receive, I got a...rather unhelpful message:</p> <pre><code>[4/4/2019 2:50:35 PM] System.Private.CoreLib: Exception while executing function: SaySomething. Microsoft.Azure.WebJobs.Host: Exception binding parameter 'command'. Microsoft.Azure.WebJobs.ServiceBus: Binding parameters to complex objects (such as 'SaySomething') uses Json.NET serialization. 1. Bind the parameter type as 'string' instead of 'SaySomething' to get the raw values and avoid JSON deserialization, or 2. Change the queue payload to be valid json. The JSON parser failed: Unexpected character encountered while parsing value: ?. Path '', line 0, position 0. </code></pre> <p>Not too great! So what's going on here? Where did that parsed value come from? Why do the path/line/position point to the very beginning of the stream? The answer leads us down a path of encoding, RFC standards, and the .NET source code.</p> <h3 id="diagnosingtheproblem">Diagnosing the problem</h3> <p>Looking at the error message, it appears we can try to do exactly what it asks and bind to a string and just deserialize manually:</p> <pre><code class="language-c#">[FunctionName("SaySomething")] public static void Run([ServiceBusTrigger("Endpoints.SaySomething", Connection = "SbConnection")]string message, ILogger log) { var command = JsonConvert.DeserializeObject&lt;SaySomething&gt;(message); log.LogInformation($"Incoming message: {command.Message}"); } </code></pre> <p>Deserializing the message like this actually works - we can run with no problems! But still strange - why did it work with a <code>string</code>, but not when automatically binding from the wire message?</p> <p>On the wire, the message payload is simply a byte array. So something could be happening reading the bytes into a string - which can be a bit "lossy" depending on the <a href="https://docs.microsoft.com/en-us/dotnet/standard/base-types/character-encoding">encoding used</a>. To fully understand what's going on, we need to understand how the text was <em>encoded</em> to see how it should be <em>decoded</em>. Clearly, decoding the string ourselves "fixes" the problem, but I don't see it as a viable solution.</p> <p>To dig further, let's drop our message binding down to its lowest form to get the raw bytes:</p> <pre><code class="language-c#">[FunctionName("SaySomething")] public static void Run([ServiceBusTrigger("Endpoints.SaySomething", Connection = "SbConnection")]byte[] message, ILogger log) { var value = Encoding.UTF8.GetString(message); var command = JsonConvert.DeserializeObject&lt;SaySomething&gt;(value); log.LogInformation($"Incoming message: {command.Message}"); } </code></pre> <p>Going this route, we get our original exception:</p> <pre><code>[4/4/2019 3:58:38 PM] System.Private.CoreLib: Exception while executing function: SaySomething. Newtonsoft.Json: Unexpected character encountered while parsing value: ?. Path '', line 0, position 0. </code></pre> <p>Something is clearly <em>different</em> between getting the <code>string</code> value through the Azure Functions/ServiceBus trigger binding, and going through <code>Encoding.UTF8</code>. To see what's different, let's look at that <code>value</code>:</p> <pre><code>{"Message":"Hello World"} </code></pre> <p>That looks fine! However, let's grab the raw bytes from the stream:</p> <pre><code>EFBBBF7B224D657373616765223A2248656C6C6F20576F726C64227D </code></pre>
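<p>(A throwaway line like this one - illustrative, not from the original post - is all it takes to produce that dump from the <code>byte[]</code> binding:)</p> <pre><code class="language-c#">// Dump the raw payload as hex before any decoding happens
log.LogInformation(BitConverter.ToString(message).Replace("-", ""));
// EFBBBF7B224D657373616765223A2248656C6C6F20576F726C64227D
</code></pre>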
<p>And put that in a decoder:</p> <pre><code>ï»¿{"Message":"Hello World"} </code></pre> <p>Well there's your problem! A bunch of junk characters at the beginning of the string. Where did those come from? A quick search of those characters reveals the culprit: our wire format included the <a href="https://en.wikipedia.org/wiki/Byte_order_mark#UTF-8">UTF8 Byte Order Mark</a> of <code>0xEF,0xBB,0xBF</code>. Whoops!</p> <h3 id="jsonbomd">JSON BOM'd</h3> <p>Having the UTF-8 BOM in our wire message messed some things up for us, but why should that matter? It turns out in the JSON RFC spec, having the <a href="https://tools.ietf.org/html/rfc8259#section-8.1">BOM in our string is forbidden</a> (emphasis mine):</p> <blockquote> <p>JSON text exchanged between systems that are not part of a closed ecosystem MUST be encoded using UTF-8 [RFC3629].</p> <p>Previous specifications of JSON have not required the use of UTF-8 when transmitting JSON text. However, the vast majority of JSON-based software implementations have chosen to use the UTF-8 encoding, to the extent that it is the only encoding that achieves interoperability.</p> <p><strong>Implementations MUST NOT add a byte order mark (U+FEFF) to the beginning of a networked-transmitted JSON text. In the interests of interoperability, implementations that parse JSON texts MAY ignore the presence of a byte order mark rather than treating it as an error.</strong></p> </blockquote> <p>Now that we've identified our culprit, why did our code sometimes succeed and sometimes fail? It turns out that we do really need to care about the encoding of our messages, and even when we think we pick sensible defaults, this may not be the case.</p> <p>Looking at the documentation for the <a href="https://docs.microsoft.com/en-us/dotnet/api/system.text.encoding.utf8?view=netframework-4.7.2#remarks"><code>Encoding.UTF8</code> property</a>, we see that the <code>Encoding</code> objects have two important toggles:</p> <ul> <li>Should it emit the UTF-8 BOM identifier?</li> <li>Should it throw for invalid bytes?</li> </ul> <p>We'll get to that second one here in a second, but something we can see from the documentation and <a href="https://github.com/dotnet/corefx/blob/master/src/Common/src/CoreLib/System/Text/UTF8Encoding.cs#L63">the code</a> is that <code>Encoding.UTF8</code> says "yes" for the first question and "no" for the second. However, if you use <code>Encoding.Default</code>, it's <a href="https://github.com/dotnet/corefx/blob/master/src/Common/src/CoreLib/System/Text/Encoding.cs#L77">different</a>! It will be "no" for the first question and "no" for the second.</p> <p>Herein lies our problem - the JSON spec says that the encoded bytes <em>must not</em> include the BOM, but that parsers <em>may</em> ignore one. Our serializer added the BOM it must not add, and our parser landed on the "does not ignore" side of that "may".</p> <p>We can't really affect the decoding of bytes to string or bytes to object in Azure Functions (or it's rather annoying to), but perhaps we can fix the problem in the first place - JSON originally encoded with a BOM.</p> <p>When debugging, I noticed that <code>Encoding.UTF8.GetBytes()</code> did not return any BOM, but clearly I'm getting one here. So what's going on?</p>
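<p>A quick side-by-side of the two encodings makes the difference visible. This is a minimal sketch, not from the original post, and it assumes <code>System.IO</code>/<code>System.Text</code> are in scope:</p> <pre><code class="language-c#">var bomEncoding = Encoding.UTF8;              // emits a BOM preamble
var noBomEncoding = new UTF8Encoding(false);  // does not

Console.WriteLine(BitConverter.ToString(bomEncoding.GetPreamble()));   // EF-BB-BF
Console.WriteLine(noBomEncoding.GetPreamble().Length);                 // 0

// GetBytes never includes the preamble - only a writer handed the BOM-emitting encoding does:
using (var stream = new MemoryStream())
{
    using (var writer = new StreamWriter(stream, Encoding.UTF8))
    {
        writer.Write("{\"Message\":\"Hello World\"}");
    }
    var bytes = stream.ToArray();
    Console.WriteLine(BitConverter.ToString(bytes, 0, 3));             // EF-BB-BF again
}
</code></pre> <p>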
It gets even muddier when we start to introduce streams.</p> <h3 id="crossingstreams">Crossing streams</h3> <p>Typically, when dealing with I/O, you're dealing with a <a href="https://docs.microsoft.com/en-us/dotnet/api/system.io.stream?view=netframework-4.7.2">Stream</a>. And typically again, if you're writing a stream, you're dealing with a <a href="https://docs.microsoft.com/en-us/dotnet/api/system.io.streamwriter?view=netframework-4.7.2#remarks">StreamWriter</a> whose default behavior is UTF-8 encoding <em>without</em> a BOM. The comments are interesting here, as it says:</p> <pre><code class="language-c#">// The high level goal is to be tolerant of encoding errors when we read and very strict // when we write. Hence, default StreamWriter encoding will throw on encoding error. </code></pre> <p>So StreamWriter is "no-BOM, throw on error" but <a href="https://github.com/dotnet/corefx/blob/master/src/Common/src/CoreLib/System/IO/StreamReader.cs#L103">StreamReader</a> is <code>Encoding.UTF8</code>, which is "yes for BOM, no for throwing error". Each option is opposite the other!</p> <p>If we're using a vanilla <code>StreamWriter</code>, we still shouldn't have a BOM. Ah, but we aren't! I was using <a href="https://particular.net/nservicebus">NServiceBus</a> to generate the message (I'm lazy that way) and its <a href="https://github.com/Particular/NServiceBus.Newtonsoft.Json">Newtonsoft.Json</a> serializer to generate the message bytes. Looking underneath the covers, we see the default reader and writer explicitly pass in <code>Encoding.UTF8</code> for both <a href="https://github.com/Particular/NServiceBus.Newtonsoft.Json/blob/develop/src/NServiceBus.Newtonsoft.Json/JsonMessageSerializer.cs#L45">reading</a> and <a href="https://github.com/Particular/NServiceBus.Newtonsoft.Json/blob/develop/src/NServiceBus.Newtonsoft.Json/JsonMessageSerializer.cs#L36">writing</a>. 
This is very likely not what we want for writing, since the default behavior of <code>Encoding.UTF8</code> is to include a BOM.</p> <p>The quick fix is to swap out the encoding with something that's a better default here in our NServiceBus setup configuration:</p> <pre><code class="language-c#">var serialization = endpointConfiguration.UseSerialization&lt;NewtonsoftSerializer&gt;(); serialization.WriterCreator(s =&gt; { var streamWriter = new StreamWriter(s, new UTF8Encoding(false)); return new JsonTextWriter(streamWriter); }); </code></pre> <p>We have a number of options here, such as just using the default <code>StreamWriter</code>, but in my case I'd rather be very explicit about what options I want to use.</p> <p>The longer fix is a <a href="https://github.com/Particular/NServiceBus.Newtonsoft.Json/pull/54">pull request to patch this behavior</a> so that the default writer will not emit the BOM (but it will need a bit of testing, since technically this changes the wire format).</p> <p>So the moral of the story - if you see weird characters like <code>ï»¿</code> showing up in your text, enjoy a couple of days digging into character encoding and making <a href="https://twitter.com/jbogard/status/1111328911609217025">really bad jokes</a>.</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/3/2019/Picture0063.png" alt=""></p> AutoMapper's Design Philosophy https://jimmybogard.com/automappers-design-philosophy/ Jimmy Bogard urn:uuid:bbaf1f4d-8dd9-58d9-f9f6-08e10b9c6d54 Mon, 25 Mar 2019 20:54:58 +0000 <p>While a lot of people use AutoMapper, and love it, I meet just as many people that <em>hate</em> it. When I hear their stories, it becomes clear to me that it's not that AutoMapper was "abused" per se, but that it was used without understanding why AutoMapper exists and what problems it was designed to solve.</p> <p>AutoMapper originated at the beginning of a large MVC application way back in the early days of MVC. Back then, there really wasn't any guidance about what exactly the "M" in "MVC" should be. Most MVC frameworks have a strong concept of a model - in Rails, Django, and many others, the M is a first-class citizen. 
The joke in ASP.NET MVC is that the "M" is silent.</p> <p>So we adopted the name "view model" to describe <a href="https://lostechies.com/jimmybogard/2009/06/30/how-we-do-mvc-view-models/">our models in MVC</a> - these were models specifically designed for a view.</p> <p>We started this long-running project with a few rules for our view models:</p> <ul> <li>Each view model would be designed for one and only one view</li> <li>Only the information needed to render or model bind is contained on the view model</li> </ul> <p>With these rules in place, we started building screens. A few dozen screens in, we started to notice a problem.</p> <h3 id="bespokemodels">Bespoke models</h3> <p>As we built screens, we needed to build out our view model types. We knew we wanted to create view models per screen, but what beyond that? How should we name the type? How should we name the members?</p> <p>We found that nearly all of our screens were just subsets of data from a richer model. We had a lot of boring assignment code:</p> <pre><code class="language-c#">var order = dbContext.FindById(id); var orderDto = new OrderDto { Id = order.Id, CustomerName = order.Customer.FullName, LineItems = order.LineItems.Select(li =&gt; new OrderDto.OrderLineItem { Id = li.Id, ProductName = li.Product.ShortName, Description = li.Product.Description, Count = li.Count, Price = li.ItemPrice }).ToList() }; </code></pre> <p>I noticed a couple of things:</p> <ul> <li>DTO names were arbitrary. Sometimes we called it "model", sometimes "Dto"</li> <li>Member names were shortened/abbreviated arbitrarily</li> </ul> <p>There wasn't any rhyme or reason behind this; it was whatever the developer decided to do. On top of this, we would get null reference exceptions fairly frequently - when there was missing data for whatever reason, our simple assignment expressions would blow up (this was also before the <a href="https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/operators/null-conditional-operators">null conditional operator</a>).</p> <p>On top of that, we would have to unit test all of this - making sure all of the properties were populated appropriately, and nothing was missing.</p> <p>Putting these two together, it was going to be a recipe for disaster for an app that would eventually have nearly 1000 screens.</p> <h3 id="enterautomapper">Enter AutoMapper</h3> <p>The architect showed me all this (I was tech lead on the project at the time) and said "fix it". The main problems as I saw them were:</p> <ul> <li>The view models were all subtly, but pointlessly, different</li> <li>The code to handle sparse models or missing data was error-prone and often missed</li> <li>The tests for all those assignments were easy to get wrong</li> </ul> <p>Was there any business value in having a property named "Price" instead of "ItemPrice"? And since we worked so hard on the original model names, adhering to the ubiquitous language of our broader team, why was it then OK for the developer to take shortcuts in the view model design?</p> <p>With this in front of me, I set out to build a tool that:</p> <ul> <li>Enforced a convention for destination types</li> <li>Removed all those null reference exceptions</li> <li>Made it super easy to test</li> </ul> <p>And thus AutoMapper was born.</p> <h3 id="automappersdesignphilosophy">AutoMapper's Design Philosophy</h3> <p>AutoMapper works because it enforces a convention. It assumes that your destination types are a subset of the source type. It assumes that everything on your destination type is meant to be mapped. It assumes that the destination member names follow the exact name of the source type. It assumes that you want to flatten complex models into simple ones.</p>
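<p>As a rough sketch of what the earlier assignment code collapses to once the DTO follows those assumptions (these member names are illustrative, mirroring the flattened source paths, and <code>configuration</code> stands in for the <code>IConfigurationProvider</code> built at startup):</p> <pre><code class="language-c#">public class OrderDto
{
    public int Id { get; set; }
    public string CustomerFullName { get; set; }            // flattens order.Customer.FullName
    public List&lt;OrderLineItemDto&gt; LineItems { get; set; }

    public class OrderLineItemDto
    {
        public int Id { get; set; }
        public string ProductShortName { get; set; }         // flattens li.Product.ShortName
        public string ProductDescription { get; set; }
        public int Count { get; set; }
        public decimal ItemPrice { get; set; }
    }
}

public class OrderProfile : Profile
{
    public OrderProfile()
    {
        CreateMap&lt;Order, OrderDto&gt;();
        CreateMap&lt;OrderLineItem, OrderDto.OrderLineItemDto&gt;();
    }
}

// All of the hand-written assignments (and their null checks and tests) become:
var orderDto = dbContext.Orders
    .Where(o =&gt; o.Id == id)
    .ProjectTo&lt;OrderDto&gt;(configuration)
    .SingleOrDefault();
</code></pre>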
<p>All of these assumptions come from our original use case - view models for MVC, where all of those assumptions are in line with our view model design. With AutoMapper, we could <em>enforce</em> our view model design philosophy. This is the true power of conventions - laying down a set of enforceable design rules that help you streamline development along the way.</p> <p>By enforcing conventions, we let our developers focus on the value-add activities, and less on the activities that provided zero or negative value, like designing bespoke view models or writing a thousand dumb unit tests.</p> <p>And this is why our usage of AutoMapper has stayed so steady over the years - because our design philosophy for view models hasn't changed. If you find yourself hating a tool, it's important to ask - what problems was this tool designed to solve? And if those problems are different than yours, perhaps that tool isn't a good fit.</p> Life Beyond Distributed Transactions: An Apostate's Implementation - Conclusion https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-conclusion/ Jimmy Bogard urn:uuid:7c9337f4-5699-b0f9-7307-e30de4ae1956 Thu, 21 Mar 2019 14:14:16 +0000 <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatcher-failure-recovery/">Failure Recovery</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-sagas/">Sagas</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-relational-resources/">Relational Resources</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-conclusion/">Conclusion</a></li> </ul> <p>We started out with a common, but nearly always overlooked problem: how do we 
reliably coordinate activities between different transactional resources? The question we need to ask first is</p> <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatcher-failure-recovery/">Failure Recovery</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-sagas/">Sagas</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-relational-resources/">Relational Resources</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-conclusion/">Conclusion</a></li> </ul> <p>We started out with a common, but nearly always overlooked problem: how do we reliably coordinate activities between different transactional resources? The question we need to ask first is - do we <em>need</em> to coordinate these activities?</p> <p>Ultimately, it's a business decision about how to deal with the messiness of a distributed world. We have resources we expect to behave transactionally (a single SQL database), resources we might <em>assume</em> to behave transactionally (multiple writes to a document database), and resources we should never assume to behave transactionally (disparate resources, such as a database and a queue). We have a responsibility to help inform and instruct the business on benefits and drawbacks, risks and upsides to each approach.</p> <p>NoSQL databases themselves are <a href="https://docs.mongodb.com/manual/core/write-operations-atomicity/">moving</a> <a href="https://docs.microsoft.com/en-us/azure/cosmos-db/database-transactions-optimistic-concurrency#multi-item-transactions">towards</a> allowing greater consistency guarantees, both within a single node and multi-node. The onus is still on the developer to understand the transactional guarantees and options available, so that our system still behaves as expected.</p> <p>"Solving" the transactional challenge against multiple resources ultimately involves a variety of patterns and techniques, the end result being naturally more complex than when we started. This is natural, however. Resource coordination is a feature, naturally requiring more code to address. We can reduce the intrusion of infrastructure concerns into our business logic by using patterns like domain events, aggregates, and outbox.</p> <p>Because addressing the transactional challenge ultimately affects the user experience, especially since partial success/failure is a distinct possibility, we can't just independently decide what the "right" solution is. From experience, I can guarantee that the "right" solution isn't ignorance - things do fail in production.</p> <p>So what is the "right" solution? 
It all comes down to the consultant's motto:</p> <p><strong>It Depends</strong></p> AutoMapper Usage Guidelines https://jimmybogard.com/automapper-usage-guidelines/ Jimmy Bogard urn:uuid:45ed24e5-ef92-0d9c-eb4c-efc67e8da295 Tue, 26 Feb 2019 14:50:34 +0000 <h3 id="configuration">Configuration</h3> <p><strong>√ DO</strong> initialize AutoMapper once with <code>Mapper.Initialize</code> at AppDomain startup in legacy ASP.NET</p> <p><strong>√ DO</strong> use the <a href="https://www.nuget.org/packages/automapper.extensions.microsoft.dependencyinjection/">AutoMapper.Extensions.Microsoft.DependencyInjection</a> package in ASP.NET Core with <code>services.AddAutoMapper(assembly[])</code></p> <p><strong>X DO NOT</strong> call <code>CreateMap</code> on each request</p> <p><strong>X DO NOT</strong> use <a href="https://docs.automapper.org/en/stable/Inline-Mapping.html">inline maps</a></p> <p><strong>√ DO</strong> organize configuration into <a href="https://docs.automapper.org/en/stable/Configuration.html#profile-instances">profiles</a></p> <p><strong>√ CONSIDER</strong> organizing profile classes close to the destination types they configure</p> <p><strong>X DO NOT</strong> access the static Mapper class inside a profile</p> <p><strong>X DO NOT</strong> use a DI container to register all profiles</p> <p><strong>X DO NOT</strong> inject dependencies into profiles</p> <p><strong>√ CONSIDER</strong> using <a href="https://docs.automapper.org/en/stable/Queryable-Extensions.html#supported-mapping-options">configuration options supported by LINQ</a> over options not supported by LINQ</p> <p><strong>X AVOID</strong> <a href="https://docs.automapper.org/en/stable/Before-and-after-map-actions.html">before/after map configuration</a></p> <p><strong>X AVOID</strong> <a href="https://docs.automapper.org/en/stable/Reverse-Mapping-and-Unflattening.html">ReverseMap</a> in cases except when mapping only top-level, non-flattened properties</p> <p><strong>X DO NOT</strong> put any logic that is not strictly mapping behavior into configuration</p> <p><strong>X DO NOT</strong> use MapFrom when the destination member can already be auto-mapped</p> <p><strong>X DO NOT</strong> use AutoMapper except in cases where the destination type is a flattened subset of properties of the source type</p> <p><strong>X DO NOT</strong> use AutoMapper to support a complex layered architecture</p> <p><strong>X AVOID</strong> using AutoMapper when you have a significant percentage of custom configuration in the form of <code>Ignore</code> or <code>MapFrom</code></p>
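<p>As a small illustration of the configuration guidelines above (type names are hypothetical): a profile sitting next to the destination type it configures, registered once at startup by assembly scanning rather than per request:</p> <pre><code class="language-c#">// OrderEditModelProfile.cs - lives alongside OrderEditModel
public class OrderEditModelProfile : Profile
{
    public OrderEditModelProfile() =&gt; CreateMap&lt;Order, OrderEditModel&gt;();
}

// Startup.cs (ASP.NET Core)
public void ConfigureServices(IServiceCollection services)
{
    services.AddAutoMapper(typeof(OrderEditModelProfile).Assembly);
}
</code></pre>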
<h3 id="modeling">Modeling</h3> <p><strong>√ DO</strong> flatten DTOs</p> <p><strong>X AVOID</strong> sharing DTOs across multiple maps</p> <p><strong>√ DO</strong> create inner types in DTOs for member types that cannot be flattened</p> <p><strong>X DO NOT</strong> create DTOs with circular associations</p> <p><strong>X AVOID</strong> changing DTO member names to control serialization</p> <p><strong>√ DO</strong> put common simple computed properties into the source model</p> <p><strong>√ DO</strong> put computed properties specific to a destination model into the destination model</p> <h3 id="execution">Execution</h3> <p><strong>√ CONSIDER</strong> using <a href="https://docs.automapper.org/en/stable/Queryable-Extensions.html">query projection</a> (<code>ProjectTo</code>) over in-memory mapping</p> <p><strong>X DO NOT</strong> abstract or encapsulate mapping behind an interface</p> <p><strong>√ DO</strong> use mapping options for runtime-resolved values in projections</p> <p><strong>√ DO</strong> use mapping options for resolving contextualized services in in-memory mapping</p> <p><strong>X DO NOT</strong> complain about AutoMapper if you are not following the usage guidelines</p> Life Beyond Distributed Transactions: An Apostate's Implementation - Relational Resources https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-relational-resources/ Jimmy Bogard urn:uuid:df98811d-1443-77ea-d1f0-2889e8aba654 Tue, 05 Feb 2019 20:46:04 +0000 
<p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatcher-failure-recovery/">Failure Recovery</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-sagas/">Sagas</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-relational-resources/">Relational Resources</a></li> </ul> <p><a href="https://github.com/jbogard/adventureworkscosmos">Sample code from this series</a></p> <p>So far in this series we've mainly concerned ourselves with a single resource that can't support distributed (or multi-entity) transactions. That is becoming less common - NoSQL options such as Azure CosmosDB now support them, and with the 4.0 release, MongoDB supports multi-document transactions.</p> <p>What does this mean for us then?</p> <p>Firstly, there are many cases where, even though transactions between disparate entities are possible, they may not be desired. We may have to make design or performance compromises to make it work, as the technology to perform multi-entity transactions will always add some overhead. Even in SQL-land, we still often need to worry about consistency, locks, concurrency, etc.</p> <p>The scenario I've found is most often overlooked is bridging disparate resources, not multiple entities in the same resource (such as with many NoSQL databases). I've blogged about <a href="https://jimmybogard.com/refactoring-towards-resilience-a-primer/">resilience patterns</a>, especially around incorporating calls to external APIs into our business transactions. 
Most typically though, I see people trying to write to a database and send a message to a queue:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2019/Picture0058.png" alt=""></p> <p>The code is rather innocuous - we're trying to do some stuff in a database, which IS transactional across multiple writes, but at the same time, try to write to a queue/broker/exchange like RabbitMQ/Azure Service Bus.</p> <p>While it may be easy/obvious to spot places where we're trying to incorporate external services into our business transactions, like APIs, it's not so obvious when it's infrastructure we <em>do</em> own and <em>is</em> transactional.</p> <p>Ultimately, our solution will be the same, which is to apply the outbox pattern.</p> <h3 id="outboxpatterngeneralized">Outbox pattern generalized</h3> <p>With the Cosmos DB approach, we placed the outbox inside each individual document. This is because our transactional boundary is a single item in a single collection. In order to provide atomicity for "publishing" messages, we need to design our outbox to the same transactional scope as our business data. If the transactional scope is a single record, the outbox goes in that record. If the scope is the database, the outbox goes in the database.</p> <p>With a SQL database, our transaction scope widens considerably - to the entire database! This also widens the possibilities of how we can model our "outbox". We don't have to store an outbox per record in SQL Server - we can instead create a single outbox for the entire database.</p> <p>What should be in this outbox table? Something similar to our original outbox in the Cosmos DB example:</p> <ul> <li>Message ID (which message is this)</li> <li>Type (what kind of message is this)</li> <li>Body (what's in the message)</li> </ul> <p>We also need to keep track of what's been dispatched or not. In the CosmosDB example, we keep sent/received messages inside each record.</p> <p>With SQL though, having a single table that grows without bound is...problematic, so we need some more information about whether or not a message has been dispatched, as well as a dispatch date to be able to clean up after ourselves.</p> <p>We can combine both into a single value however - "DispatchedAt", as a timestamp:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2019/Picture0059.png" alt=""></p> <p>Processing the outbox will be a bit different than our original Cosmos DB example, since we now have a single table to deal with.</p> <h3 id="sendingviaoursqloutbox">Sending via our SQL outbox</h3> <p>In the Cosmos DB example, we used domain events to communicate. We can do similar in our SQL example, but we'll more or less need to draw some boundaries. Are our domain events just to coordinate activities in our domain model, or are they there to interact with the outside world as service-level integration events? 
Previously, we only used them for domain events, and used special domain event handlers to "republish" as service events.</p> <p>There are lots of tradeoffs between each approach, but in general, it's best not to combine the idea of domain events and service events.</p> <p>To keep things flexible, we can simply model our messages as plainly as possible, to the point of just including them on the <code>DbContext</code>:</p> <pre><code class="language-c#">public Task SaveMessageAsync&lt;T&gt;(T message) { var outboxMessage = new OutboxMessage { Id = Guid.NewGuid(), Type = typeof(T).FullName, Body = JsonConvert.SerializeObject(message) }; return Outbox.AddAsync(outboxMessage); } </code></pre> <p>When we're doing something that needs any kind of messaging, instead of sending directly to a broker, we save a new message as part of a transaction:</p> <pre><code class="language-c#">using (var context = new AdventureWorks2016Context()) { using (var transaction = context.Database.BeginTransaction()) { var product = await context.Products.FindAsync(productId); product.Price = newPrice; await context.SaveMessageAsync(new ProductPriceChanged { Id = productId, Price = newPrice }); await context.SaveChangesAsync(); transaction.Commit(); } } </code></pre> <p>Each request that needs to send a message does so by only writing to the business data and outbox in a single transaction:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2019/Picture0060.png" alt=""></p> <p>Then, as with our Cosmos DB example, a dispatcher reads from the outbox and sends these messages along to our broker:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2019/Picture0061.png" alt=""></p> <p>And also like our Cosmos DB example, our dispatcher can run right after the transaction completes, as well as a background process to "clean up" after any missed messages. After each dispatch, we'd set the date of dispatch to ensure we skip already dispatched messages:</p> <pre><code class="language-c#">using (var context = new AdventureWorks2016Context()) { using (var transaction = context.Database.BeginTransaction()) { var message = await context.Outbox .Where(m =&gt; m.DispatchedAt == null) .FirstOrDefaultAsync(); bus.Send(message); message.DispatchedAt = DateTime.Now; await context.SaveChangesAsync(); transaction.Commit(); } } </code></pre> <p>With this in place, we can safely "send" messages in a transaction. However, we still have to deal with receiving these messages twice!</p> <h3 id="deduplicatingmessages">De-duplicating messages</h3> <p>In order to make sure we achieve at-least-once delivery, but exactly-once processing, we'll have to keep track of messages we've processed. To do so, we can just add a little extra information to our outbox - not just when we've <em>sent</em> messages, but when we've <em>processed</em> messages:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2019/Picture0062.png" alt=""></p> <p>Similar to our inbox, we'll include the processed date with each message. As we process a message, we'll double-check our outbox to see if it's already processed before. If not, we can perform the work. If so, we just skip the message - nothing to be done!</p> <p>With these measures in place, the last piece is to decide how long we should keep track of messages in our outbox. What's the longest amount of time a message can be marked as processed that we might receive the message again? An hour? A day? A week? 
Probably not a year, but something that makes sense. I'd start large, say a week, and move it lower as we understand the characteristics of our system.</p> <p>With the outbox pattern, we can still coordinate activities between transactional resources by keeping track of what we need to communicate, when we've communicated, and when we've processed. It's like a little to-do list that our system uses to check things off as it goes, never losing track of what it needs to do.</p> <p>In our last post, I'll wrap up and cover some scenarios where we should avoid such a level of coordination.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=HFO_6KiI2CE:OF8BSGJkRas:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=HFO_6KiI2CE:OF8BSGJkRas:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=HFO_6KiI2CE:OF8BSGJkRas:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=HFO_6KiI2CE:OF8BSGJkRas:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=HFO_6KiI2CE:OF8BSGJkRas:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/HFO_6KiI2CE" height="1" width="1" alt=""/> Gocode Vim Plugin and Go Modules https://blog.jasonmeridth.com/posts/gocode-vim-plugin-and-go-modules/ Jason Meridth urn:uuid:c9be1149-395b-e365-707e-8fa2f475093c Sat, 05 Jan 2019 17:09:26 +0000 <p>I recently purchased <a href="https://lets-go.alexedwards.net/">Let’s Go</a> from Alex Edwards. I wanted an end-to-end Golang website tutorial. It has been great. Lots learned.</p> <p>Unfortunately, he is using Go’s modules and the version of the gocode Vim plugin I was using did not support Go modules.</p> <h3 id="solution">Solution:</h3> <p>Use <a href="https://github.com/stamblerre/gocode">this fork</a> of the gocode Vim plugin and you’ll get support for Go modules.</p> <p>I use <a href="https://github.com/junegunn/vim-plug">Vim Plug</a> for my Vim plugins. Huge fan of Vundle but I like the post-actions feature of Plug. I just had to change one line of my vimrc and re-run updates.</p> <div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gh">diff --git a/vimrc b/vimrc index 3e8edf1..8395705 100644 </span><span class="gd">--- a/vimrc </span><span class="gi">+++ b/vimrc </span><span class="gu">@@ -73,7 +73,7 @@ endif </span> let editor_name='nvim' Plug 'zchee/deoplete-go', { 'do': 'make'} endif <span class="gd">- Plug 'nsf/gocode', { 'rtp': 'vim', 'do': '~/.config/nvim/plugged/gocode/vim/symlink.sh' } </span><span class="gi">+ Plug 'stamblerre/gocode', { 'rtp': 'vim', 'do': '~/.vim/plugged/gocode/vim/symlink.sh' } </span> Plug 'godoctor/godoctor.vim', {'for': 'go'} " Gocode refactoring tool " } </code></pre></div></div> <p>That is the line I had to change then run <code class="highlighter-rouge">:PlugUpdate!</code> and the new plugin was installed.</p> <p>I figured all of this out due to <a href="https://github.com/zchee/deoplete-go/issues/134#issuecomment-435436305">this comment</a> by <a href="https://github.com/cippaciong">Tommaso Sardelli</a> on Github. 
Thank you Tommaso.</p> Raspberry Pi Kubernetes Cluster - Part 4 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4/ Jason Meridth urn:uuid:56f4fdcb-5310-bbaa-c7cf-d34ef7af7682 Fri, 28 Dec 2018 16:35:23 +0000 <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1">Raspberry Pi Kubenetes Cluster - Part 1</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2">Raspberry Pi Kubenetes Cluster - Part 2</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3">Raspberry Pi Kubenetes Cluster - Part 3</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4">Raspberry Pi Kubenetes Cluster - Part 4</a></p> <p>Howdy again.</p> <p>In this post I’m going to show you how to create a docker image to run on ARM architecture and also how to deploy it and view it.</p> <p>To start please view my basic flask application called fl8 <a href="https://github.com/meridth/fl8">here</a></p> <p>If you’d like to clone and use it:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone git@github.com:meridth/fl8.git <span class="o">&amp;&amp;</span> <span class="nb">cd </span>fl8 </code></pre></div></div> <h1 id="arm-docker-image">ARM docker image</h1> <p>First we need to learn about QEMU</p> <h3 id="what-is-qemu-and-qemu-installation">What is QEMU and QEMU installation</h3> <p>QEMU (Quick EMUlator) is an Open-Source hosted hypervisor, i.e. an hypervisor running on a OS just as other computer programs, which performs hardware virtualization. QEMU emulates CPUs of several architectures, e.g. x86, PPC, ARM and SPARC. It allows the execution of non-native target executables emulating the native execution and, as we require in this case, the cross-building process.</p> <h3 id="base-docker-image-that-includes-qemu">Base Docker image that includes QEMU</h3> <p>Please open the <code class="highlighter-rouge">Dockerfile.arm</code> and notice the first line: <code class="highlighter-rouge">FROM hypriot/rpi-alpine</code>. This is a base image that includes the target qemu statically linked executable, <em>qemu-arm-static</em> in this case. I chose <code class="highlighter-rouge">hypriot/rpi-alpine</code> because the alpine base images are much smaller than other base images.</p> <h3 id="register-qemu-in-the-build-agent">Register QEMU in the build agent</h3> <p>To add QEMU in the build agent there is a specific Docker Image performing what we need, so just run in your command line:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="nt">--rm</span> <span class="nt">--privileged</span> multiarch/qemu-user-static:register <span class="nt">--reset</span> </code></pre></div></div> <h3 id="build-image">Build image</h3> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">-f</span> ./Dockerfile.arm <span class="nt">-t</span> meridth/rpi-fl8 <span class="nb">.</span> </code></pre></div></div> <p>And voila! You now have an image that will run on Raspberry Pis.</p> <h1 id="deployment-and-service">Deployment and Service</h1> <p><code class="highlighter-rouge">/.run-rpi.sh</code> is my script where I run a Kubernetes deployment with 3 replicas and a Kubernetes service. 
Please read <code class="highlighter-rouge">fl8-rpi-deployment.yml</code> and <code class="highlighter-rouge">fl8-rpi-service.yml</code>. They are only different from the other deployment and service files by labels. Labels are key/vaule pairs that can be used by selectors later.</p> <p>The deployment will pull my image from <code class="highlighter-rouge">meridth/rpi-fl8</code> on dockerhub. If you have uploaded your docker image somewhere you can change the deployment file to pull that image instead.</p> <h1 id="viewing-application">Viewing application</h1> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get pods </code></pre></div></div> <p>Choose a pod to create the port forwarding ssh tunnel.</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl port-forward <span class="o">[</span>pod-name] <span class="o">[</span>app-port]:[app-port] </code></pre></div></div> <p>Example: <code class="highlighter-rouge">kubectl port-forward rpi-fl8-5d84dd8ff6-d9tgz 5010:5010</code></p> <p>The final result when you go to <code class="highlighter-rouge">http://localhost:5010</code> in a browser.</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/port_forward.png" alt="port forward result" /></p> <p>Hope this helps someone else. Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4/">Raspberry Pi Kubernetes Cluster - Part 4</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on December 28, 2018.</p> Raspberry Pi Kubernetes Cluster - Part 3 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3/ Jason Meridth urn:uuid:c12fa6c5-8e7a-6c5d-af84-3c0452cf4ae4 Mon, 24 Dec 2018 21:59:23 +0000 <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1">Raspberry Pi Kubenetes Cluster - Part 1</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2">Raspberry Pi Kubenetes Cluster - Part 2</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3">Raspberry Pi Kubenetes Cluster - Part 3</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4">Raspberry Pi Kubenetes Cluster - Part 4</a></p> <p>Well, it took me long enough to follow up on my previous posts. There are reasons.</p> <ol> <li>The day job has been fun and busy</li> <li>Family life has been fun and busy</li> <li>I kept hitting annoying errors when trying to get my cluster up and running</li> </ol> <p>The first two reasons are the usual reasons a person doesn’t blog. :)</p> <p>The last one is what prevented me from blogging sooner. I had mutliple issues when trying to use <a href="https://rak8s.io">rak8s</a> to setup my cluster. I’m a big fan of <a href="https://ansible.com">Ansible</a> and I do not like running scripts over and over. 
I did read <a href="https://gist.github.com/alexellis/fdbc90de7691a1b9edb545c17da2d975">K8S on Raspbian Lite</a> from top to bottom and realized automation would make this much better.</p> <!--more--> <h3 id="the-issues-i-experienced">The issues I experienced:</h3> <h4 id="apt-get-update-would-not-work">apt-get update would not work</h4> <p>I started with the vanilla Raspbian lite image to run on my nodes and had MANY MANY issues with running <code class="highlighter-rouge">apt-get update</code> and <code class="highlighter-rouge">apt-get upgrade</code>. The mirrors would disconnect often and just stall. This doesn’t help my attempted usage of rak8s which does both on the <code class="highlighter-rouge">cluster.yml</code> run (which I’ll talk about later).</p> <h4 id="rak8s-changes-needed-to-run-hypriotos-and-kubernetes-1131">rak8s changes needed to run HypriotOS and kubernetes 1.13.1</h4> <p>Clone the repo locally and I’ll walk you through what I changed to get <a href="https://rak8s.io">rak8s</a> working for me and HypriotOS.</p> <p>Change the following files:</p> <ul> <li><code class="highlighter-rouge">ansible.cfg</code> <ul> <li>change user from <code class="highlighter-rouge">pi</code> to <code class="highlighter-rouge">pirate</code></li> </ul> </li> <li><code class="highlighter-rouge">roles/kubeadm/tasks/main.yml</code> <ul> <li>add <code class="highlighter-rouge">ignore_errors: True</code> to <code class="highlighter-rouge">Disable Swap</code> task</li> <li>I have an open PR for this <a href="https://github.com/rak8s/rak8s/pull/46">here</a></li> </ul> </li> <li><code class="highlighter-rouge">group_vars/all.yml</code> <ul> <li>Change <code class="highlighter-rouge">kubernetes_package_version</code> to <code class="highlighter-rouge">"1.13.1-00"</code></li> <li>Change <code class="highlighter-rouge">kubernetes_version</code> to <code class="highlighter-rouge">"v1.13.1"</code></li> </ul> </li> </ul> <p>After you make those changes you can run <code class="highlighter-rouge">ansible-playbook cluster.yml</code> as the rak8s documentation suggests. Please note this is after you edit <code class="highlighter-rouge">inventory</code> and copy <code class="highlighter-rouge">ssh</code> keys to raspberry pis.</p> <h4 id="flannel-networking-issue-once-nodes-are-up">Flannel networking issue once nodes are up</h4> <p>After I get all of the nodes up I noticed the master node was marked ast <code class="highlighter-rouge">NotReady</code> and when I ran <code class="highlighter-rouge">kubectl describe node raks8000</code> I saw the following error:</p> <blockquote> <p>KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized</p> </blockquote> <p>This error is known in kubernetes &gt; 1.12 and flannel v0.10.0. It is mentioned in <a href="https://github.com/coreos/flannel/issues/1044">this issue</a>. The fix is specifically mentioned <a href="https://github.com/coreos/flannel/issues/1044#issuecomment-427247749">here</a>. 
It is to run the following command:</p> <p><code class="highlighter-rouge">kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml</code></p> <p>After reading the issue it seems the fix will be in the next version of flannel and will be backported to v0.10.0.</p> <h1 id="a-running-cluster">A running cluster</h1> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/running_cluster.png" alt="Running Cluster" /></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3/">Raspberry Pi Kubernetes Cluster - Part 3</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on December 24, 2018.</p> MVP how minimal https://lostechies.com/ryansvihla/2018/12/20/mvp-how-minimal/ Los Techies urn:uuid:3afadd9e-98a7-8d37-b797-5403312a2999 Thu, 20 Dec 2018 20:00:00 +0000 MVPs or Minimum Viable Products are pretty contentious ideas for something seemingly simple. Depending on background and where people are coming from experience-wise, those terms carry radically different ideas. In recent history I’ve seen up close two extreme contrasting examples of MVP: <p>MVPs or Minimum Viable Products are pretty contentious ideas for something seemingly simple. Depending on background and where people are coming from experience-wise, those terms carry radically different ideas. In recent history I’ve seen up close two extreme contrasting examples of MVP:</p> <ul> <li>Mega Minimal: website and db, mostly manual on the backend</li> <li>Mega Mega: provisioning system, dynamic tuning of systems via ML, automated operations, monitoring, and a few others I’m leaving out.</li> </ul> <h2 id="feedback">Feedback</h2> <p>If we’re evaluating which approach gives us more feedback, Mega Minimal MVP is gonna win hands down here. Some will counter they don’t want to give people a bad impression with a limited product and that’s fair, but it’s better than no impression (the dreaded never-shipped MVP). The Mega Mega MVP I referenced took months to demo, only had one of those checkboxes set up, and wasn’t ever demoed again. So we can categorically say it failed at getting any feedback.</p> <p>Whereas the Mega Minimal MVP got enough feedback and users for the founders to realize that wasn’t a business for them. Better than after hiring a huge team and sinking a million plus into dev efforts, for sure. Not the happy ending I’m sure you all were expecting, but I view that as mission accomplished.</p> <h2 id="core-value">Core Value</h2> <ul> <li>Mega Minimal: they only focused on a single feature, executed well enough that people gave them some positive feedback, but not enough to justify automating everything.</li> <li>Mega Mega: I’m not sure anyone who talked about the product saw the same core value, and there were several rewrites and shifts along the way.</li> </ul> <p>Advantage: Mega Minimal again.</p> <h2 id="what-about-entrants-into-a-crowded-field">What about entrants into a crowded field</h2> <p>Well, that is harder, and the MVP tends to be less minimal, because the baseline expectations are just much higher. I still lean towards Mega Minimal having a better chance at getting users, since there is a non-zero chance the Mega Mega MVP will never get finished. 
I still think it is worth the exercise of focusing on the core value that makes your product <em>not</em> a me-too; consider how you can find a niche in a crowded field instead of just being “better”, and your MVP can be that niche differentiator.</p> <h2 id="internal-users">Internal users</h2> <p>Sometimes a good middle ground is considering getting lots of internal users if you’re really worried about bad experiences. This has its definite downsides however, and you may not get diverse enough opinions. But it does give you some feedback while saving some face and avoiding some bad experiences. I often think of the example of EC2, which was heavily used by Amazon before being released to the world. That was a luxury Amazon had, where their customer base and their user base happened to be very similar, and they had bigger scale needs than any of their early customers, so the early internal feedback loop was a very strong signal.</p> <h2 id="summary">Summary</h2> <p>In the end, however you want to approach MVPs is up to you, and if you find success with a meatier MVP than I have, please don’t let me push you away from what works. But if you are having trouble shipping and are getting pushed all the time to add one more feature to that MVP before releasing it, consider stepping back and asking: is this really core value for the product? Do you already have your core value? If so, consider just releasing it.</p> Surprise Go is ok for me now https://lostechies.com/ryansvihla/2018/12/13/surprise-go-is-ok/ Los Techies urn:uuid:53abf2a3-23f2-5855-0e2d-81148fb908bf Thu, 13 Dec 2018 20:23:00 +0000 I’m surprised to say this, but I am ok using Go now. It’s not my style, but I am able to build most anything I want to with it, and the tooling around it continues to improve. <p>I’m surprised to say this, but I am ok using Go now. It’s not my style, but I am able to build most anything I want to with it, and the tooling around it continues to improve.</p> <p>About 7 months ago I wrote about all the things I didn’t really care for in Go, and now I either am no longer so bothered by them or things have improved.</p> <p>Go Modules so far is a huge improvement over Dep and Glide for dependency management. It’s easy to set up, performant, and eliminates the GOPATH silliness. I haven’t tried it yet with some of the goofier libraries that gave me problems in the past (k8s api for example) so the jury is out on that, but again pretty impressed. I no longer have to check in vendor to speed up builds. Lesson: use Go Modules.</p> <p>I pretty much stopped using channels for everything but shutdown signals, and that fits my preferences pretty well. I use mutexes and semaphores for my multithreaded code and feel no guilt about it. This cut out a lot of pain for me, and with the excellent race detector I feel really comfortable writing multi-threaded code in Go now. Lesson: don’t use channels much.</p> <p>Lack of generics still sometimes sucks, but I usually implement some crappy casting with dynamic types if I need that. I’ve sorta made my peace with just writing more code, and am no longer so hung up. Lesson: relax.</p> <p>Error handling I’m still struggling with; I thought about using one of the error Wrap() libraries, but an official one is in draft spec now, so I’ll wait on that. I now tend to have less nesting of functions as a result; this probably means longer functions than I like, but my code looks more “normal” now. This is a trade-off I’m ok with. 
Lesson: relax more.</p> <p>The main virtue of Go, as I see it now, is that it is very popular in the infrastructure space where I am, and so it’s becoming the common tongue (largely replacing Python for those sorts of tasks). For this, honestly, it’s about right. It’s easy to rip out command line tools and deploy binaries for every platform with no runtime install.</p> <p>The community’s conservative attitude I sort of view as a feature now, in that there isn’t a bunch of different popular options and there is no arguing over what file format is used. This drove me up the wall initially, but I appreciate how much less time I spend on these things now.</p> <p>So now I suspect Go will be my “last” programming language. It’s not the one I would have chosen, but where I am at in my career, where most of my dev work is automation and tooling, it fits the bill pretty well.</p> <p>Also, equally important, most of the people working with me didn’t have full-time careers as developers or spend their time reading “Domain-Driven Design” (amazing book), so adding in a bunch of nuanced stuff that may be technically optimal but assumes the reader grasps all that nuance isn’t a good tradeoff for me.</p> <p>So I think I sorta get it now. I’ll never be a cheerleader for the language, but it definitely solves my problems well enough.</p> AutoMapper 8.0.0 Released https://jimmybogard.com/automapper-8-0-0-released/ Jimmy Bogard urn:uuid:573e408e-7080-27eb-c313-b09b86aaf7c2 Sat, 17 Nov 2018 14:44:27 +0000 <p>Today we released AutoMapper 8.0.0:</p> <ul> <li><a href="http://docs.automapper.org/en/stable/8.0-Upgrade-Guide.html">Upgrade Guide</a></li> <li><a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v8.0.0">Release Notes</a></li> </ul> <p>AutoMapper 8.0 brings some breaking API changes, meant to simplify our configuration options, which have grown quite a bit over time, and remove some confusion about what configuration options were effectively equivalent. The upgrade guide walks through the</p> <p>Today we released AutoMapper 8.0.0:</p> <ul> <li><a href="http://docs.automapper.org/en/stable/8.0-Upgrade-Guide.html">Upgrade Guide</a></li> <li><a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v8.0.0">Release Notes</a></li> </ul> <p>AutoMapper 8.0 brings some breaking API changes, meant to simplify our configuration options, which have grown quite a bit over time, and remove some confusion about what configuration options were effectively equivalent. The upgrade guide walks through the breaking changes.</p> <p>The motivation for breaking the API also came from some confusion around mapping configuration used for in-memory mappings and LINQ projections. 
With the API consolidation, we've unified the APIs and made explicitly clear which configuration can be used with <code>ProjectTo</code>, and which cannot.</p> <p>We've also added a new feature, <a href="http://docs.automapper.org/en/stable/Value-converters.html">Value Converters</a>, which allow you to define reusable mappers scoped to individual members.</p> <p>As well as many other little fixes :)</p> <p>Enjoy!</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Le4mCktQWhM:5DeyS6xbKb4:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Le4mCktQWhM:5DeyS6xbKb4:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=Le4mCktQWhM:5DeyS6xbKb4:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Le4mCktQWhM:5DeyS6xbKb4:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=Le4mCktQWhM:5DeyS6xbKb4:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/Le4mCktQWhM" height="1" width="1" alt=""/> Life Beyond Distributed Transactions: An Apostate's Implementation - Sagas https://jimmybogard.com/life-beyond-distributed-transactions-sagas/ Jimmy Bogard urn:uuid:5327777c-2ed9-631a-dc1b-47d4f1f39994 Wed, 12 Sep 2018 15:25:04 +0000 <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatcher-failure-recovery/">Failure Recovery</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-sagas/">Sagas</a></li> </ul> <p><a href="https://github.com/jbogard/adventureworkscosmos">Sample code from this series</a></p> <p>So far in this series, we've looked at the ins and outs of moving beyond distributed transactions using persisted messages as a means of coordination between different</p> <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatcher-failure-recovery/">Failure 
Recovery</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-sagas/">Sagas</a></li> </ul> <p><a href="https://github.com/jbogard/adventureworkscosmos">Sample code from this series</a></p> <p>So far in this series, we've looked at the ins and outs of moving beyond distributed transactions using persisted messages as a means of coordination between different documents (or resources). One common question in the example I give is "how do I actually make sure either both operations happen or neither?" To answer this question, we need to recognize that this "all-or-nothing" approach is a kind of transaction. But we've already said we're trying to avoid distributed transactions!</p> <p>We won't be building a new kind of distributed transaction, but instead one that lives longer than any one single request, or a long-lived transaction. To implement a long-lived transaction, we need to look at the Saga pattern, first described in <a href="https://www.cs.cornell.edu/andru/cs711/2002fa/reading/sagas.pdf">the original Sagas paper (Molina, Salem)</a>. The most common example of a saga I've seen described is booking a vacation. When you book a vacation, you need to:</p> <ul> <li>Book a flight</li> <li>Book a hotel</li> <li>Reserve a car</li> </ul> <p>You can't do all three at once - that's like getting a conference call together with all companies and getting consensus altogether. Not going to happen! Instead, we build this overall business transaction as a series of requests and compensating actions in case something goes wrong:</p> <ul> <li>Cancel flight</li> <li>Cancel hotel</li> <li>Cancel car reservation</li> </ul> <p>Our saga operations can be linear (<a href="https://lostechies.com/jimmybogard/2013/03/14/saga-implementation-patterns-controller/">controller pattern</a>) or parallel (<a href="https://lostechies.com/jimmybogard/2013/03/11/saga-implementation-patterns-observer/">observer pattern</a>) or in microservices terms, <a href="https://microservices.io/patterns/data/saga.html">orchestration/choreography</a>.</p> <p>In order to satisfy our saga constraints, our requests must:</p> <ul> <li>Be idempotent</li> <li>Can abort</li> </ul> <p>And the compensating requests must:</p> <ul> <li>Be idempotent</li> <li>Cannot abort</li> <li>Be commutative</li> </ul> <p>In our example, we have a model that basically assumes success. We:</p> <ul> <li>Approve an order</li> <li>Deduct stock</li> </ul> <p>Let's modify this a bit to create an order fulfillment saga. For this saga, we can fulfill an order if and only if:</p> <ul> <li>Our order is approved</li> <li>We have enough stock</li> </ul> <p>If our order is rejected, we need to release the stock. If we don't have enough stock, we need to un-approve (reject) our order. And keeping with our example, we need something to coordinate this activity - our saga. But rather than just call it some generic "saga", let's give it a meaningful name - <code>OrderFulfillmentSaga</code>:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/8/2018/Picture0055.png" alt=""></p> <p>This saga will coordinate the activities of the order request and stock. 
And because we need this saga to have the same communication properties of the other two documents, we can simply model this saga as just another document with our inbox/outbox!</p> <p>The overall flow will be:</p> <ul> <li>Once an order is created, kick off a new order fulfillment saga</li> <li>This saga will coordinate actions between the stock and order request</li> <li>If the order is rejected, the saga needs to initiate a stock return</li> <li>If there is not enough stock, the saga needs to cancel the order</li> </ul> <p>Let's start with kicking off the saga!</p> <h3 id="kickingthingsoff">Kicking things off</h3> <p>When should we kick the saga off? It's tempting to do this in the initial request that creates a new order request, but remember - we can't put more than one document in a transaction unless we're very sure fulfillment and order requests will live close together in our Cosmos DB instance. That means we need to use our document messages to communicate with a saga - even if it doesn't exist!</p> <p>We don't want to fulfill an order request twice, and for our simple scenario let's just assume an order request can't be "retried". Originally, our <code>OrderRequest</code> created an <code>ItemPurchased</code> document message for each item - we'll remove that in favor of a single <code>OrderCreated</code> document message:</p> <pre><code class="language-c#">public class OrderCreated : IDocumentMessage { public Guid Id { get; set; } public Guid OrderId { get; set; } public List&lt;LineItem&gt; LineItems { get; set; } public class LineItem { public int ProductId { get; set; } public int Quantity { get; set; } } } </code></pre> <p>We <em>could</em> just have the <code>OrderId</code> and have the receiver then load up the <code>OrderRequest</code>, but for simplicity sake (and assuming you can't change the order after created), we'll treat this information as immutable and keep it in the message. 
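<p>Since the rest of the post mutates an <code>OrderFulfillment</code> document from several handlers, it may help to see its rough shape up front. This is a sketch pieced together from the handlers shown below - the actual class in the sample repo may differ:</p> <pre><code class="language-c#">public class OrderFulfillment : DocumentBase
{
    // Id, Send() and the inbox/outbox are assumed to come from DocumentBase
    public Guid OrderId { get; set; }

    // Flags flipped as notifications arrive - possibly out of order
    public bool OrderApproved { get; set; }
    public bool OrderRejected { get; set; }
    public bool IsCancelled { get; set; }
    public bool CancelOrderRequested { get; set; }

    public List&lt;LineItem&gt; LineItems { get; set; } = new List&lt;LineItem&gt;();

    public class LineItem
    {
        public int ProductId { get; set; }
        public int AmountRequested { get; set; }
        public bool StockConfirmed { get; set; }
        public bool StockReturnRequested { get; set; }
    }
}
</code></pre>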
Now when we create an <code>OrderRequest</code>, we'll also send this message:</p> <pre><code class="language-c#">public class OrderRequest : DocumentBase { public OrderRequest(ShoppingCart cart) { Id = Guid.NewGuid(); Customer = new Customer { FirstName = "Jane", MiddleName = "Mary", LastName = "Doe" }; Items = cart.Items.Select(li =&gt; new LineItem { ProductId = li.Key, Quantity = li.Value.Quantity, ListPrice = li.Value.ListPrice, ProductName = li.Value.ProductName }).ToList(); Status = Status.New; Send(new OrderCreated { Id = Guid.NewGuid(), OrderId = Id, LineItems = Items .Select(item =&gt; new OrderCreated.LineItem { ProductId = item.ProductId, Quantity = item.Quantity }) .ToList() }); } </code></pre> <p>It's not much different than our original order creation - we're just now including the document message to initiate the order fulfillment saga.</p> <p>Our handler for this document message needs to find the right <code>OrderFulfillment</code> saga document and let the saga handle the message:</p> <pre><code class="language-c#">public class OrderCreatedHandler : IDocumentMessageHandler&lt;OrderCreated&gt; { private readonly IDocumentDBRepository&lt;OrderFulfillment&gt; _repository; public OrderCreatedHandler(IDocumentDBRepository&lt;OrderFulfillment&gt; repository) =&gt; _repository = repository; public async Task Handle(OrderCreated message) { var orderFulfillment = (await _repository .GetItemsAsync(s =&gt; s.OrderId == message.OrderId)) .FirstOrDefault(); if (orderFulfillment == null) { orderFulfillment = new OrderFulfillment { Id = Guid.NewGuid(), OrderId = message.OrderId }; await _repository.CreateItemAsync(orderFulfillment); } orderFulfillment.Handle(message); await _repository.UpdateItemAsync(orderFulfillment); } } </code></pre> <p>Not shown here - but we do need to make sure we only have a single fulfillment saga per order, so we can configure inside Cosmos DB <code>OrderId</code> as a unique index.</p> <p>The <code>orderFulfillment.Handle</code> method needs to start, and request stock:</p> <pre><code class="language-c#">public void Handle(OrderCreated message) { Process(message, m =&gt; { if (IsCancelled) return; LineItems = m.LineItems .Select(li =&gt; new LineItem { ProductId = li.ProductId, AmountRequested = li.Quantity }) .ToList(); foreach (var lineItem in LineItems) { Send(new StockRequest { Id = Guid.NewGuid(), ProductId = lineItem.ProductId, AmountRequested = lineItem.AmountRequested, OrderFulfillmentId = Id }); } }); } </code></pre> <p>In my example, I've made the <code>OrderFulfillment</code> saga coordinate with <code>Stock</code> with our <code>StockRequest</code>. This is instead of <code>Stock</code> listening for <code>OrderCreated</code> itself. My general thought here is that fulfillment manages the requests/returns for stock, and any business logic around that.</p> <p>I also have a little check at the beginning - if an order is cancelled, we don't want to send out stock requests. This is the piece that's enforcing commutative requests - we might receive an order rejected notice <em>before</em> receiving the order created notice! When it comes to messaging, I always assume messages are received out of order, which means our business logic needs to be able to handle these situations.</p> <h3 id="handlingstockrequests">Handling stock requests</h3> <p>Our original <code>Stock</code> implementation was quite naive, but this time we want to more intelligently handle orders. 
In our stock handler, we'll still have a document per product, but now it can make a decision based on the quantity available:</p> <pre><code class="language-c#">public void Handle(StockRequest message) { Process(message, e =&gt; { if (QuantityAvailable &gt;= message.AmountRequested) { QuantityAvailable -= e.AmountRequested; Send(new StockRequestConfirmed { Id = Guid.NewGuid(), OrderFulfillmentId = e.OrderFulfillmentId, ProductId = ProductId }); } else { Send(new StockRequestDenied { Id = Guid.NewGuid(), OrderFulfillmentId = e.OrderFulfillmentId, ProductId = ProductId }); } }); } </code></pre> <p>Because we're using document messages with our inbox de-duping messages, we don't need to worry about processing the stock request twice. Our simple logic just checks the stock, and if it's successful we can deduct the stock and return a <code>StockRequestConfirmed</code> message. If not, we can return a <code>StockRequestDenied</code> message.</p> <h3 id="asuccessfulorderfulfillment">A successful order fulfillment</h3> <p>Our original logic said that "an order can be fulfilled if the order is approved and we have enough stock". Approving an order is a human decision, so we have a basic form for doing so:</p> <pre><code class="language-c#">@if (Model.Order.Status == Status.New) { &lt;form asp-controller="Order" asp-action="Reject" asp-route-id="@Model.Order.Id" method="post"&gt; &lt;input type="submit" value="Reject"/&gt; &lt;/form&gt; &lt;form asp-controller="Order" asp-action="Approve" asp-route-id="@Model.Order.Id" method="post"&gt; &lt;input type="submit" value="Approve"/&gt; &lt;/form&gt; } </code></pre> <p>And when the order is approved, we just delegate to MediatR to handle this request:</p> <pre><code class="language-c#">public class ApproveOrder { public class Request : IRequest { public Guid Id { get; set; } } public class Handler : IRequestHandler&lt;Request&gt; { private readonly IDocumentDBRepository&lt;OrderRequest&gt; _orderRepository; public Handler(IDocumentDBRepository&lt;OrderRequest&gt; orderRepository) { _orderRepository = orderRepository; } public async Task&lt;Unit&gt; Handle(Request request, CancellationToken cancellationToken) { var orderRequest = await _orderRepository.GetItemAsync(request.Id); orderRequest.Approve(); await _orderRepository.UpdateItemAsync(orderRequest); return Unit.Value; } } } </code></pre> <p>Which then delegates to our document to approve the order request:</p> <pre><code class="language-c#">public void Approve() { if (Status == Status.Approved) return; if (Status == Status.Rejected) throw new InvalidOperationException("Cannot approve a rejected order."); Status = Status.Approved; Send(new OrderApproved { Id = Guid.NewGuid(), OrderId = Id }); } </code></pre> <p>We only want to send out the <code>OrderApproved</code> message once, so just some basic status checking handles that.</p> <p>On the order fulfillment side:</p> <pre><code class="language-c#">public void Handle(OrderApproved message) { Process(message, m =&gt; { OrderApproved = true; if (IsCancelled) { ProcessCancellation(); } else { CheckForSuccess(); } }); } </code></pre> <p>Each time we receive some external notification, we need to process the success/failure path, which I'll come back to in a bit. 
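<p>(One note on mechanics: every handler here funnels through a <code>Process</code> helper on the document base class that this excerpt never shows. A minimal sketch, assuming it de-dupes against the document's inbox - the member names are mine, and the real implementation in the sample repo may differ:)</p> <pre><code class="language-c#">// Sketch only - assumes DocumentBase keeps an Inbox of already-processed messages
protected void Process&lt;TMessage&gt;(TMessage message, Action&lt;TMessage&gt; action)
    where TMessage : IDocumentMessage
{
    // Seen this message id before? Then it's a redelivery - skip the work.
    if (Inbox.Any(m =&gt; m.Id == message.Id))
        return;

    action(message);

    // Record it so a second delivery becomes a no-op
    Inbox.Add(message);
}
</code></pre> <p>That inbox check is what keeps the handlers below idempotent without any extra ceremony in the business logic.</p>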
Our handler for <code>StockRequestConfirmed</code> will be similar, except we're tracking stock on a line item by line item basis:</p> <pre><code class="language-c#">public void Handle(StockRequestConfirmed message) { Process(message, m =&gt; { var lineItem = LineItems.Single(li =&gt; li.ProductId == m.ProductId); lineItem.StockConfirmed = true; if (IsCancelled) { ReturnStock(lineItem); } else { CheckForSuccess(); } }); } </code></pre> <p>The <code>CheckForSuccess</code> method will look to see if all the order fulfillment requirements are met:</p> <pre><code class="language-c#">private void CheckForSuccess() { if (IsCancelled) return; if (LineItems.All(li =&gt; li.StockConfirmed) &amp;&amp; OrderApproved) { Send(new OrderFulfillmentSuccessful { Id = Guid.NewGuid(), OrderId = OrderId }); } } </code></pre> <p>Only if all of our stock has been confirmed and our order has been approved will we send a message back to the <code>Order</code> document to then finally complete the order:</p> <pre><code class="language-c#">public void Handle(OrderFulfillmentSuccessful message) { Process(message, m =&gt; { if (Status == Status.Rejected || Status == Status.Cancelled) return; Status = Status.Completed; }); } </code></pre> <p>The overall message flow looks something like this:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/8/2018/Picture0056.png" alt=""></p> <p>For each step along the way, we've got idempotency handled for us by the inbox/outbox structures. However, we still need to handle out-of-order messages, which is why you'll see success/fail checks every time we receive a notification.</p> <p>Now that we've got the success path taken care of, let's look at the failure paths.</p> <h3 id="cancellingtheorderfulfillment">Cancelling the order fulfillment</h3> <p>The first way our order fulfillment can be cancelled is if an order is rejected. From the web app, our <code>Order</code> document handles a rejection:</p> <pre><code class="language-c#">public void Reject() { if (Status == Status.Rejected) return; if (Status == Status.Approved) throw new InvalidOperationException("Cannot reject an approved order."); if (Status == Status.Completed) throw new InvalidOperationException("Cannot reject a completed order."); Status = Status.Rejected; Send(new OrderRejected { Id = Guid.NewGuid(), OrderId = Id }); } </code></pre> <p>Our order sends an <code>OrderRejected</code> document message that our order fulfillment document receives:</p> <pre><code class="language-c#">public void Handle(OrderRejected message) { Process(message, m =&gt; { OrderRejected = true; Cancel(); }); } </code></pre> <p>The <code>Cancel</code> method marks the order fulfillment as cancelled and then processes the cancellation:</p> <pre><code class="language-c#">private void Cancel() { IsCancelled = true; ProcessCancellation(); } </code></pre> <p>Similarly, a notification of <code>StockRequestDenied</code> will cancel the order fulfillment:</p> <pre><code class="language-c#">public void Handle(StockRequestDenied message) { Process(message, m =&gt; { Cancel(); }); } </code></pre> <p>In order to process our order fulfillment cancellation, we need to do a couple of things. First, we need to notify our <code>Order</code> document that it needs to be cancelled. 
And for any <code>Stock</code> items that were fulfilled, we need to return that stock:</p> <pre><code class="language-c#">private void ProcessCancellation() { if (!CancelOrderRequested &amp;&amp; !OrderRejected) { CancelOrderRequested = true; Send(new CancelOrderRequest { Id = Guid.NewGuid(), OrderId = OrderId }); } foreach (var lineItem in LineItems.Where(li =&gt; li.StockConfirmed)) { ReturnStock(lineItem); } } </code></pre> <p>Each step along the way, we keep track of what messages we've sent out so that we don't send notifications twice. To return stock:</p> <pre><code class="language-c#">private void ReturnStock(LineItem lineItem) { if (lineItem.StockReturnRequested) return; lineItem.StockReturnRequested = true; Send(new StockReturnRequested { Id = Guid.NewGuid(), ProductId = lineItem.ProductId, AmountToReturn = lineItem.AmountRequested }); } </code></pre> <p>If stock item has already had a return requested, we just skip it. Finally, the order can receive the cancel order request:</p> <pre><code class="language-c#">public void Handle(CancelOrderRequest message) { Process(message, m =&gt; { if (Status == Status.Rejected) return; Status = Status.Cancelled; }); } </code></pre> <p>With our failure flow in place, the message flows looks something like:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/8/2018/Picture0057.png" alt=""></p> <p>Our order fulfillment saga can now handle the complex process of managing stock and order approvals, keeping track of each step along the way and dealing with success/failure when it receives the notifications. It handles idempotency, retries, and commutative/out-of-order messages.</p> <p>In the next post, we'll look at how we can implement the inbox/outbox pattern for other resources, allowing us to bridge to other kinds of databases where a distributed transaction is just plain impossible.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=PZ0sOaalWiM:8ktwFvqIFtg:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=PZ0sOaalWiM:8ktwFvqIFtg:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=PZ0sOaalWiM:8ktwFvqIFtg:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=PZ0sOaalWiM:8ktwFvqIFtg:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=PZ0sOaalWiM:8ktwFvqIFtg:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/PZ0sOaalWiM" height="1" width="1" alt=""/> Life Beyond Distributed Transactions: An Apostate's Implementation - Dispatcher Failure Recovery https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatcher-failure-recovery/ Jimmy Bogard urn:uuid:1e3616c3-afbb-7454-4196-f04d76ad6fa6 Thu, 30 Aug 2018 13:15:14 +0000 <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a 
href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatcher-failure-recovery/">Failure Recovery</a></li> </ul> <p><a href="https://github.com/jbogard/adventureworkscosmos">Sample code from this series</a></p> <p>In the last post, we looked at how we can recover from exceptions from <em>inside</em> our code handling messages. We perform some action in our document, and something</p> <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatcher-failure-recovery/">Failure Recovery</a></li> </ul> <p><a href="https://github.com/jbogard/adventureworkscosmos">Sample code from this series</a></p> <p>In the last post, we looked at how we can recover from exceptions from <em>inside</em> our code handling messages. We perform some action in our document, and something goes wrong. But what happens when something goes wrong <em>during</em> the dispatch process:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/7/2018/Picture0054.png" alt=""></p> <p>If our dispatcher itself fails, either:</p> <ul> <li>Pulling a message from the outbox</li> <li>Sending a message to the receiver</li> <li>Saving documents</li> </ul> <p>Then our documents are still consistent, but we've lost the execution flow to dispatch. We've mitigated our failures somewhat, but we still can have the possibility of some unrecoverable failure in our dispatcher, and no amount of exception handling can prevent a document with a message sitting in its outbox, waiting for processing.</p> <p>If we were able to wrap everything, documents and queues, in a transaction, then we could gracefully recover. However, the point of this series is that we <em>don't</em> have access to distributed transactions, so that option is out.</p> <p>What we need is some kind of background process looking for documents with pending messages in their outbox, ready to process.</p> <h3 id="designingthedispatchrescuer">Designing the dispatch rescuer</h3> <p>We already have the dispatch recovery process using async messaging to retry an individual dispatch, which works great when the failure is in application code. 
When the failure is environmental, our only clue something is wrong is a document with messages in their outbox.</p> <p>The general process to recover from these failures would be:</p> <ul> <li>Find any documents with unprocessed messages (ideally oldest first)</li> <li>Retry them one at a time</li> </ul> <p>We have the possibility though that we have:</p> <ul> <li>In flight dispatching</li> <li>In flight retries</li> </ul> <p>Ideally, we have some sort of lagging processor, so that when we have issues, we don't interfere with normal processing. Luckily for us, Cosmos DB already comes with the ability to be notified that documents have been changed, the <a href="https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed">Change Feed</a>, and this change feed even lets us work with a built-in delay. After each document changes, we can wait some amount of time where we assume that dispatching happened, and re-check the document to make sure dispatching occurred.</p> <p>Our rescuer will:</p> <ul> <li>Get notified when a document changes</li> <li>Check to see if there are still outbox messages to process</li> <li>Send a message to reprocess that document</li> </ul> <p>It's somewhat naive, as we'll get notified for all document changes. To make our lives a little bit easier, we can turn off immediate processing for document message dispatching and just dispatch through asynchronous processes, but it's not necessary.</p> <h3 id="creatingthechangefeedprocessor">Creating the change feed processor</h3> <p>Using the <a href="https://docs.microsoft.com/en-us/azure/cosmos-db/change-feed#using-the-change-feed-processor-library">documentation as our guide</a>, we need to create two components:</p> <ul> <li>A document feed observer to receive document change notifications</li> <li>A change feed processor to host and invoke our observer</li> </ul> <p>Since we already have a background processor in our dispatcher, we can simply host the observer in the same endpoint. The observer won't actually be doing the work, however - we'll still send a message out to process the document. This is because NServiceBus still provides all the logic around retries and poison messages that I don't want to code again. Like most of my integrations, I kick out the work into a durable message and NServiceBus as quickly as possible.</p> <p>That makes my observer pretty small:</p> <pre><code class="language-c#">public class DocumentFeedObserver&lt;T&gt; : IChangeFeedObserver where T : DocumentBase { static ILog log = LogManager.GetLogger&lt;DocumentFeedObserver&lt;T&gt;&gt;(); public Task OpenAsync(IChangeFeedObserverContext context) =&gt; Task.CompletedTask; public Task CloseAsync( IChangeFeedObserverContext context, ChangeFeedObserverCloseReason reason) =&gt; Task.CompletedTask; public async Task ProcessChangesAsync( IChangeFeedObserverContext context, IReadOnlyList&lt;Document&gt; docs, CancellationToken cancellationToken) { foreach (var doc in docs) { log.Info($"Processing changes for document {doc.Id}"); var item = (dynamic)doc; if (item.Outbox.Count &gt; 0) { ProcessDocumentMessages message = ProcessDocumentMessages.New&lt;T&gt;(item); await Program.Endpoint.SendLocal(message); } } } } </code></pre> <p>The <code>OpenAsync</code> and <code>CloseAsync</code> methods won't do anything, all my logic is in the <code>ProcessChangesAsync</code> method. In that method, I get a collection of changed documents. 
I made my <code>DocumentChangeObserver</code> generic because each observer observes only one collection, so I have to create distinct observer instances per concrete <code>DocumentBase</code> type.</p> <p>In the method, I loop over all the documents passed in and look to see if the document has any messages in the outbox. If so, I'll create a new <code>ProcessDocumentMessages</code> to send to myself (as I'm also hosting NServiceBus in this application), which will then process the document messages.</p> <p>With our simple observer in place, we need to incorporate the observer in our application startup.</p> <h3 id="configuringtheobserver">Configuring the observer</h3> <p>For our observer, we have a couple of choices on how we want to process document changes. Because our observer will get called for <em>every</em> document change, we want to be careful about the work it does.</p> <p>Our original design had document messages dispatched in the same request as the original work. If we keep this, we want to make sure that we minimize the amount of rework for a document with messages. Ideally, our observer only kicks out messages when there is truly something wrong with dispatching. This will also minimize the amount of queue messages, reserving them for the error case.</p> <p>So a simple solution would be to just introduce some delay in our processing:</p> <pre><code class="language-c#">private static ChangeFeedProcessorBuilder CreateBuilder&lt;T&gt;(DocumentClient client) where T : DocumentBase { var builder = new ChangeFeedProcessorBuilder(); var uri = new Uri(CosmosUrl); var dbClient = new ChangeFeedDocumentClient(client); builder .WithHostName(HostName) .WithFeedCollection(new DocumentCollectionInfo { DatabaseName = typeof(T).Name, CollectionName = "Items", Uri = uri, MasterKey = CosmosKey }) .WithLeaseCollection(new DocumentCollectionInfo { DatabaseName = typeof(T).Name, CollectionName = "Leases", Uri = uri, MasterKey = CosmosKey }) .WithProcessorOptions(new ChangeFeedProcessorOptions { FeedPollDelay = TimeSpan.FromSeconds(15), }) .WithFeedDocumentClient(dbClient) .WithLeaseDocumentClient(dbClient) .WithObserver&lt;DocumentFeedObserver&lt;T&gt;&gt;(); return builder; } </code></pre> <p>The <code>ChangeFeedProcessorBuilder</code> is configured for every document type we want to observe, with a timespan in this example of 15 seconds. I could bump this up a bit - say to an hour or so. 
How long to wait will really depend on the business and the SLAs they expect for work to complete.</p> <p>Finally, in our application startup, we need to create the builder, processor, and start it all up:</p> <pre><code class="language-c#">Endpoint = await NServiceBus.Endpoint.Start(endpointConfiguration) .ConfigureAwait(false); var builder = CreateBuilder&lt;OrderRequest&gt;(client); var processor = await builder.BuildAsync(); await processor.StartAsync(); Console.WriteLine("Press any key to exit"); Console.ReadKey(); await Endpoint.Stop() .ConfigureAwait(false); await processor.StopAsync(); </code></pre> <p>With this in place, we can have a final guard against failures, assuming that someone completely pulled the plug on our application and all we have left is a document with messages sitting in its outbox.</p> <p>In our next post, we'll look at using sagas to coordinate changes between documents - what happens if we want either all, or none of our changes to be processed in our documents?</p> Life Beyond Distributed Transactions: An Apostate's Implementation - Failures and Retries https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/ Jimmy Bogard urn:uuid:33b3ae6e-4542-449e-b9d9-465a738e17cf Thu, 16 Aug 2018 15:05:49 +0000 <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> </ul> <p>In the last post, we looked at an example of dispatching document messages to other documents using a central dispatcher. 
Our example worked well in the happy path scenario, but what happens when something goes</p> <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-failures-and-retries/">Failures and Retries</a></li> </ul> <p>In the last post, we looked at an example of dispatching document messages to other documents using a central dispatcher. Our example worked well in the happy path scenario, but what happens when something goes wrong? We of course do not want a failure in dispatching messages to make the entire request fail, but what would that mean for us?</p> <p>We described a general solution to put the dispatching work aside using queues and messaging, effectively saying "yes, dispatching failed, so let's put it aside to look at in the future". This would allow the overall main request to complete:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/7/2018/Picture0051.png" alt=""></p> <p>Our original example also assumed that we would dispatch our messages <em>immediately</em> in the context of the same request, which isn't a bad default but maybe isn't always desirable. Let's first look at the scenario of dispatching immediately, and what failures could mean.</p> <h3 id="characterizingourfailures">Characterizing our failures</h3> <p>Dispatching failures could happen for a number of reasons, but I generally see a continuum:</p> <ul> <li>Transient</li> <li>Delayed</li> <li>Permanent</li> </ul> <p>My failures usually have some sort of time component associated with them. A transient failure may mean that if I simply try the action again immediately, it may work. This most often comes up with some sort of concurrency violation against the database.</p> <p>Delayed failures are a bit different, where I won't succeed if I try immediately, but I will if I just wait some amount of time.</p> <p>Permanent failures mean there's an unrecoverable failure, and no amount of retries will allow the operation to succeed.</p> <p>Of course, we could simply ignore failures, but our business and customers might not agree with that approach. How might we handle each of these kinds of failures?</p> <h3 id="transientfailures">Transient failures</h3> <p>If something goes wrong, can we simply retry the operation? That seems fairly straightforward - but we don't want to retry <em>too</em> many times. We can implement some simple policies, either with a hardcoded number of retries or using something like the <a href="http://www.thepollyproject.org/">Polly Project</a> to retry an action.</p> <p>To keep things simple, we can have a policy to address the most common transient failure - optimistic concurrency problems. 
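<p>As an aside, if we'd rather lean on a library than hand-roll the retry loop shown below, the same idea can be expressed with <a href="http://www.thepollyproject.org/">Polly</a>. This is only a sketch under assumptions - the five-retry cap, the precondition-failed filter, and wrapping the repository update directly are mine, not from the original sample:</p> <pre><code class="language-c#">// Sketch: retry only on optimistic concurrency failures, up to five times.
// (The MediatR behavior shown below also resets the unit of work between attempts.)
var retryPolicy = Policy
    .Handle&lt;DocumentClientException&gt;(e =&gt; e.StatusCode == HttpStatusCode.PreconditionFailed)
    .RetryAsync(5);

// Wrap whatever operation saves the document.
await retryPolicy.ExecuteAsync(() =&gt; _orderRepository.UpdateItemAsync(orderRequest));
</code></pre>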
We first want to <a href="https://docs.microsoft.com/en-us/azure/cosmos-db/faq#how-does-the-sql-api-provide-concurrency">enable OCC checks</a>, of course:</p> <pre><code class="language-c#">public async Task&lt;Document&gt; UpdateItemAsync(T item) { var ac = new AccessCondition { Condition = item.ETag, Type = AccessConditionType.IfMatch }; return await _client.ReplaceDocumentAsync( UriFactory.CreateDocumentUri(DatabaseId, CollectionId, item.Id.ToString()), item, new RequestOptions { AccessCondition = ac }); } </code></pre> <p>When we get a concurrency violation, this results in a <code>DocumentClientException</code> with a special status code (hooray HTTP!). We need some way to wrap our request and retry if necessary - time for another MediatR behavior! This one will retry our action some number of times:</p> <pre><code class="language-c#">public class RetryUnitOfWorkBehavior&lt;TRequest, TResponse&gt; : IPipelineBehavior&lt;TRequest, TResponse&gt; { private readonly IUnitOfWork _unitOfWork; public RetryUnitOfWorkBehavior(IUnitOfWork unitOfWork) =&gt; _unitOfWork = unitOfWork; public async Task&lt;TResponse&gt; Handle( TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate&lt;TResponse&gt; next) { var retryCount = 0; while (true) { try { /* await inside the try so asynchronous failures are caught and retried */ return await next(); } catch (DocumentClientException e) { if (e.StatusCode != HttpStatusCode.PreconditionFailed) throw; if (retryCount &gt;= 5) throw; _unitOfWork.Reset(); retryCount++; } } } } </code></pre> <p>If our action fails due to a concurrency problem, we need to clear out our unit of work's identity map and try again:</p> <pre><code class="language-c#">public void Reset() { _identityMap.Clear(); } </code></pre> <p>Then we just need to register our behavior with the container like our original unit of work behavior, and we're set. We could, of course, have modified our original behavior to add retries - but I want to keep them separate because they truly are different concerns.</p> <p>That works for immediate failures, but we still haven't looked at failures in our message dispatching. For that, we'll need to involve some messaging.</p> <h3 id="deferreddispatchingwithmessagingandnservicebus">Deferred dispatching with messaging and NServiceBus</h3> <p>The immediate retries can take care of transient failures during a request, but if there's some deeper issue, we want to defer the document message dispatching to some time in the future. To make my life easier, and not have to implement half the <a href="https://www.enterpriseintegrationpatterns.com/">Enterprise Integration Patterns book</a> myself, I'll leverage <a href="https://particular.net/nservicebus">NServiceBus</a> to manage my messaging.</p> <p>Our original dispatcher looped through our unit of work's identity map to find documents with messages that need dispatching. 
We'll want to augment that behavior to catch any failures, and dispatch those messages offline:</p> <pre><code class="language-c#">public interface IOfflineDispatcher { Task DispatchOffline(DocumentBase document); } </code></pre> <p>Our <code>Complete</code> method of the unit of work will now take these failed dispatches and instruct our offline dispatcher to dispatch them offline:</p> <pre><code class="language-c#">public async Task Complete() { var toSkip = new HashSet&lt;DocumentBase&gt;(DocumentBaseEqualityComparer.Instance); while (_identityMap .Except(toSkip, DocumentBaseEqualityComparer.Instance) .Any(a =&gt; a.Outbox.Any())) { var document = _identityMap .Except(toSkip, DocumentBaseEqualityComparer.Instance) .FirstOrDefault(a =&gt; a.Outbox.Any()); if (document == null) continue; var ex = await _dispatcher.Dispatch(document); if (ex != null) { toSkip.Add(document); await _offlineDispatcher.DispatchOffline(document); } } } </code></pre> <p>This is a somewhat naive implementation - it doesn't allow for partial document message processing. If a document has 3 messages, we retry the entire document instead of an individual message at a time. We could manage this more granularly, by including a "retry" collection on our document. But this introduces more issues - we could still have some system failure after dispatch, and our document message would never make it to retry.</p> <p>When our transaction scope is individual operations instead of the entire request, we have to assume failure at <em>every</em> instance and examine what might go wrong.</p> <p>The offline dispatcher uses NServiceBus to send a durable message out:</p> <pre><code class="language-c#">public class UniformSessionOfflineDispatcher : IOfflineDispatcher { private readonly IUniformSession _uniformSession; public UniformSessionOfflineDispatcher(IUniformSession uniformSession) =&gt; _uniformSession = uniformSession; public Task DispatchOffline(DocumentBase document) =&gt; _uniformSession.Send(ProcessDocumentMessages.New(document)); } </code></pre> <p>The <a href="https://docs.particular.net/nservicebus/messaging/uniformsession"><code>IUniformSession</code> piece</a> from NServiceBus lets us send a message from any context (in a web application, backend service, etc.). Our message just includes the document ID and type:</p> <pre><code class="language-c#">public class ProcessDocumentMessages : ICommand { public Guid DocumentId { get; set; } public string DocumentType { get; set; } /* For NSB */ public ProcessDocumentMessages() { } private ProcessDocumentMessages(Guid documentId, string documentType) { DocumentId = documentId; DocumentType = documentType; } public static ProcessDocumentMessages New&lt;TDocument&gt;( TDocument document) where TDocument : DocumentBase { return new ProcessDocumentMessages( document.Id, document.GetType().AssemblyQualifiedName); } } </code></pre> <p>We can use this information to load our document from the repository. With this message in place, we now need the component that will <em>receive</em> our message. 
For this, it will really depend on our deployment, but for now I'll just make a .NET Core console application that includes our NServiceBus hosting piece and a handler for that message:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/7/2018/Picture0052.png" alt=""></p> <p>I won't dig too much into the NServiceBus configuration as it's really not that germane, but let's look at the handler for that message:</p> <pre><code class="language-c#">public class ProcessDocumentMessagesHandler : IHandleMessages&lt;ProcessDocumentMessages&gt; { private readonly IDocumentMessageDispatcher _dispatcher; public ProcessDocumentMessagesHandler(IDocumentMessageDispatcher dispatcher) =&gt; _dispatcher = dispatcher; public Task Handle(ProcessDocumentMessages message, IMessageHandlerContext context) =&gt; _dispatcher.Dispatch(message); } </code></pre> <p>Also not very exciting! This class is what NServiceBus dispatches the durable message to. For our simple example, I'm using RabbitMQ, so if something goes wrong our message goes into a queue:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/7/2018/Picture0053.png" alt=""></p> <p>Our handler receives this message to process it. The dispatcher is slightly different, as it needs to work with a message instead of an actual document, so it needs to load it up first:</p> <pre><code class="language-c#">public async Task Dispatch(ProcessDocumentMessages command) { var documentType = Type.GetType(command.DocumentType); var repository = GetRepository(documentType); var document = await repository.FindById(command.DocumentId); if (document == null) { return; } foreach (var message in document.Outbox.ToArray()) { var handler = GetHandler(message); await handler.Handle(message, _serviceFactory); document.ProcessDocumentMessage(message); await repository.Update(document); } } </code></pre> <p>One other key difference in this dispatch method is that we don't wrap anything in any kind of <code>try-catch</code> to report back errors. In an in-process dispatch mode, we still want the main request to succeed. In the offline processing mode, we're only dealing with document dispatching. And since we're using NServiceBus, we can rely on its <a href="https://docs.particular.net/nservicebus/recoverability/">built-in recoverability behavior</a> with immediate and delayed retries, eventually moving messages to an error queue.</p> <p>With this in place, we can put forth an optimistic, try-immediately policy, but fall back on durable messaging if something goes wrong with our immediate dispatch. 
It's not bulletproof, and in the next post, I'll look at how we can handle some sort of doomsday scenario, where we have a failure between the dispatch failure and queuing the retry message.</p> Life Beyond Distributed Transactions: An Apostate's Implementation - Dispatching Example https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/ Jimmy Bogard urn:uuid:e4c35daa-5c50-d5b4-c719-e7a58e27ac7e Mon, 13 Aug 2018 19:24:06 +0000 <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> </ul> <p>In the last post, we looked at refactoring our documents to use messaging to communicate changes. We're still missing something, however - the dispatcher:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/7/2018/Picture0050.png" alt=""></p> <p>Our dispatcher is the main component that facilitates document communication. For a given document,</p> <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Dispatching Example</a></li> </ul> <p>In the last post, we looked at refactoring our documents to use messaging to communicate changes. We're still missing something, however - the dispatcher:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/7/2018/Picture0050.png" alt=""></p> <p>Our dispatcher is the main component that facilitates document communication. For a given document, it needs to:</p> <ul> <li>Read messages out of a document's outbox</li> <li>Find the document message handler for each, and invoke it</li> <li>Manage failures for a document message handler</li> </ul> <p>We'll tackle that last piece in a future post. 
There's one piece we do need to think about first - where does the dispatcher get its list of documents to dispatch messages to?</p> <p>Before we get to the dispatcher, we need to solve for this problem - knowing which documents need dispatching!</p> <h3 id="introducingaunitofwork">Introducing a unit of work</h3> <p>For a given request, we'll load up a document and affect some change with it. We already have a pinch point in which our documents are loaded - the repository. If we want to dispatch document messages in the same request, we'll need to keep track of our documents that we've loaded in a request. For this, we can use a <a href="https://martinfowler.com/eaaCatalog/unitOfWork.html">Unit of Work</a>.</p> <p>Any ORM that you use will implement this pattern - for Entity Framework, for example, the DbContext is your Unit of Work. For Cosmos DB's SDK, there really isn't a concept of these ORM patterns. We have to introduce them ourselves.</p> <p>Our unit of work will keep track of documents for a given session/request, letting us interact with the loaded documents during the dispatch phase of a request. Our Unit of Work will also serve as an <a href="https://www.martinfowler.com/eaaCatalog/identityMap.html">identity map</a> - the thing that makes sure that when we load a document in a request, it's only loaded once. Here's our basic <code>IUnitOfWork</code> interface:</p> <pre><code class="language-c#">public interface IUnitOfWork { T Find&lt;T&gt;(Guid id) where T : DocumentBase; void Register(DocumentBase document); void Register(IEnumerable&lt;DocumentBase&gt; documents); Task Complete(); } </code></pre> <p>The implementation contains the "identity map" as a simple <code>HashSet</code></p> <pre><code class="language-c#">public class UnitOfWork : IUnitOfWork { private readonly ISet&lt;DocumentBase&gt; _identityMap = new HashSet&lt;DocumentBase&gt;(DocumentBaseEqualityComparer.Instance); </code></pre> <p>Then we can register an instance with our <code>UnitOfWork</code>:</p> <pre><code class="language-c#">public void Register(DocumentBase document) { _identityMap.Add(document); } public void Register(IEnumerable&lt;DocumentBase&gt; documents) { foreach (var document in documents) { Register(document); } } </code></pre> <p>Finding an existing <code>DocumentBase</code> just searches our identity map:</p> <pre><code class="language-c#">public T Find&lt;T&gt;(Guid id) where T : DocumentBase =&gt; _identityMap.OfType&lt;T&gt;().FirstOrDefault(ab =&gt; ab.Id == id); </code></pre> <p>We'll come back to the <code>Complete</code> method, because this will be the part where we dispatch. 
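<p>One piece referenced above but not shown is <code>DocumentBaseEqualityComparer</code>, which the <code>HashSet</code> uses to decide whether two tracked documents are the same. A reasonable sketch, assuming documents are compared by <code>Id</code> just like document messages are:</p> <pre><code class="language-c#">public class DocumentBaseEqualityComparer : IEqualityComparer&lt;DocumentBase&gt;
{
    public static readonly DocumentBaseEqualityComparer Instance
        = new DocumentBaseEqualityComparer();

    // Two documents are the "same" identity map entry when their Ids match.
    public bool Equals(DocumentBase x, DocumentBase y) =&gt; x.Id == y.Id;

    public int GetHashCode(DocumentBase obj) =&gt; obj.Id.GetHashCode();
}
</code></pre>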
We still need the part where we register our documents in the unit of work, and this will be in our repository implementation:</p> <pre><code class="language-c#">public async Task&lt;T&gt; GetItemAsync(Guid id) { try { var root = _unitOfWork.Find&lt;T&gt;(id); if (root != null) return root; Document document = await _client.ReadDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id.ToString())); var item = (T)(dynamic)document; _unitOfWork.Register(item); return item; } catch (DocumentClientException e) { if (e.StatusCode == System.Net.HttpStatusCode.NotFound) { return null; } throw; } } </code></pre> <p>We'll repeat this for any method in our repository that loads a document, registering and looking up in our unit of work.</p> <p>With a means to track our documents, let's see how we'll dispatch.</p> <h3 id="dispatchingdocumentmessages">Dispatching document messages</h3> <p>Our dispatcher's fairly straightforward - the only wrinkle is we'll need to surface any potential exception out. Instead of just crashing in case something goes awry, we'll want to just surface the exception and let the caller decide how to handle failures:</p> <pre><code class="language-c#">public interface IDocumentMessageDispatcher { Task&lt;Exception&gt; Dispatch(DocumentBase document); } </code></pre> <p>If I'm dispatching a document message to three handlers, I don't want one handler to prevent dispatching to the others.</p> <p>We have another challenge - our interface is not generic for dispatching, but the handlers and repositories are! We'll have to do some generics tricks to unwrap our base type to the correct generic types. The basic flow will be:</p> <ul> <li>For each document message: <ul> <li>Find document message handlers</li> <li>Call the handler</li> <li>Remove the document message from the outbox</li> <li>Save the document</li> </ul> </li> </ul> <p>Here's our basic implementation:</p> <pre><code class="language-c#">public async Task&lt;Exception&gt; Dispatch(DocumentBase document) { var repository = GetRepository(document.GetType()); foreach (var documentMessage in document.Outbox.ToArray()) { try { var handler = GetHandler(documentMessage); await handler.Handle(documentMessage, _serviceFactory); document.ProcessDocumentMessage(documentMessage); await repository.Update(document); } catch (Exception ex) { return ex; } } return null; } </code></pre> <p>We first build a repository based on the document type. Next, we loop through each document message in the outbox. For each document message, we'll find the handler(s) and call them. Once those succeed, we'll process our document message (removing it from the outbox) and update our document. We want to update for each document message in the outbox - if there are 3 document messages in the outbox, we save 3 times, so that once a message completes we don't have to go back to it if something later goes wrong.</p> <p>The <code>GetHandler</code> method is a bit wonky, because we're bridging generics. 
Basically, we create a non-generic version of the document message handlers:</p> <pre><code class="language-c#">private abstract class DomainEventDispatcherHandler { public abstract Task Handle( IDocumentMessage documentMessage, ServiceFactory factory); } </code></pre> <p>Then create a generic version that inherits from this:</p> <pre><code class="language-c#">private class DomainEventDispatcherHandler&lt;T&gt; : DomainEventDispatcherHandler where T : IDocumentMessage { public override Task Handle(IDocumentMessage documentMessage, ServiceFactory factory) { return HandleCore((T)documentMessage, factory); } private static async Task HandleCore(T domainEvent, ServiceFactory factory) { var handlers = factory.GetInstances&lt;IDocumentMessageHandler&lt;T&gt;&gt;(); foreach (var handler in handlers) { await handler.Handle(domainEvent); } } } </code></pre> <p>I've used this pattern countless times, basically to satisfy the compiler. I've tried <code>dynamic</code> too but it introduces other problems. Then to call this, our <code>GetHandler</code> instantiates the generic version, but returns the non-generic base class:</p> <pre><code class="language-c#">private static DomainEventDispatcherHandler GetHandler( IDocumentMessage documentMessage) { var genericDispatcherType = typeof(DomainEventDispatcherHandler&lt;&gt;) .MakeGenericType(documentMessage.GetType()); return (DomainEventDispatcherHandler) Activator.CreateInstance(genericDispatcherType); } </code></pre> <p>With this, I can have non-generic code still call into generics. I'll do something similar with the repository:</p> <pre><code class="language-c#">private abstract class DocumentDbRepo { public abstract Task&lt;DocumentBase&gt; FindById(Guid id); public abstract Task Update(DocumentBase document); } </code></pre> <p>With these bridges in place, my dispatcher can interact with the concrete generic repositories and handlers. The final piece is the document cleaning up its outbox:</p> <pre><code class="language-c#">public void ProcessDocumentMessage( IDocumentMessage documentMessage) { _outbox?.Remove(documentMessage); } </code></pre> <p>With our dispatcher done, and our unit of work in place, we can now focus on the piece that will <em>invoke</em> our unit of work.</p> <h3 id="buildingamediatrbehavior">Building a MediatR behavior</h3> <p>We want our unit of work to complete with each request once everything is "done". For ASP.NET Core applications, this might mean some kind of filter. For us, I want the dispatching to work really with any context, so one possibility is to use a <a href="https://github.com/jbogard/MediatR/wiki/Behaviors">MediatR behavior</a> to wrap our MediatR handler. 
A filter would work too of course, but we'd need to mimic our filters in tests if we want everything to still get dispatched appropriately.</p> <p>The behavior is pretty straightforward:</p> <pre><code class="language-c#">public class UnitOfWorkBehavior&lt;TRequest, TResponse&gt; : IPipelineBehavior&lt;TRequest, TResponse&gt; { private readonly IUnitOfWork _unitOfWork; public UnitOfWorkBehavior(IUnitOfWork unitOfWork) { _unitOfWork = unitOfWork; } public async Task&lt;TResponse&gt; Handle( TRequest request, CancellationToken token, RequestHandlerDelegate&lt;TResponse&gt; next) { var response = await next(); await _unitOfWork.Complete(); return response; } } </code></pre> <p>We do the main work, then once that's finished, complete our unit of work.</p> <p>That's all of our infrastructure pieces, and the last part is registering these components with the DI container at startup:</p> <pre><code class="language-c#">services.AddMediatR(typeof(Startup)); services.AddScoped(typeof(IDocumentDBRepository&lt;&gt;), typeof(DocumentDBRepository&lt;&gt;)); services.AddScoped&lt;IUnitOfWork, UnitOfWork&gt;(); services.AddScoped&lt;IDocumentMessageDispatcher, DocumentMessageDispatcher&gt;(); services.AddScoped(typeof(IPipelineBehavior&lt;,&gt;), typeof(UnitOfWorkBehavior&lt;,&gt;)); services.Scan(c =&gt; { c.FromAssembliesOf(typeof(Startup)) .AddClasses(t =&gt; t.AssignableTo(typeof(IDocumentMessageHandler&lt;&gt;))) .AsImplementedInterfaces() .WithTransientLifetime(); }); </code></pre> <p>We add our MediatR handlers using the <a href="https://www.nuget.org/packages/MediatR.Extensions.Microsoft.DependencyInjection">MediatR.Extensions.Microsoft.DependencyInjection</a> package, our generic repository, unit of work, dispatcher, and unit of work behavior. Finally, we add all of the <code>IDocumentMessageHandler</code> implementations using <a href="https://github.com/khellang/Scrutor">Scrutor</a>, making our lives much easier to add all the handlers in one go.</p> <p>With all this in place, we can run and verify that our handlers fire and we can see the message in the inbox of the Stock item:</p> <pre><code class="language-json">{ "QuantityAvailable": 99, "ProductId": 771, "id": "cfbb6333-ed9f-49e7-8640-bb920d5c9106", "Outbox": { "$type": "System.Collections.Generic.HashSet`1[[AdventureWorksCosmos.UI.Infrastructure.IDocumentMessage, AdventureWorksCosmos.UI]], System.Core", "$values": [] }, "Inbox": { "$type": "System.Collections.Generic.HashSet`1[[AdventureWorksCosmos.UI.Infrastructure.IDocumentMessage, AdventureWorksCosmos.UI]], System.Core", "$values": [ { "$type": "AdventureWorksCosmos.UI.Models.Orders.ItemPurchased, AdventureWorksCosmos.UI", "ProductId": 771, "Quantity": 1, "Id": "2ab2108c-9698-49e8-93de-a3ced453836a" } ] }, "_rid": "WQk4AKSQMwACAAAAAAAAAA==", "_self": "dbs/WQk4AA==/colls/WQk4AKSQMwA=/docs/WQk4AKSQMwACAAAAAAAAAA==/", "_etag": "\"060077c2-0000-0000-0000-5b71d8a10000\"", "_attachments": "attachments/", "_ts": 1534187681 } </code></pre> <p>We now have effective document messaging between our documents!</p> <p>Well, almost.</p> <p>In the next post, we'll walk through what to do when things go wrong: failures and retries.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=h_qmyNvXZ8M:w2kXVjou_i4:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=h_qmyNvXZ8M:w2kXVjou_i4:V_sGLiPBpWU"><img 
src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=h_qmyNvXZ8M:w2kXVjou_i4:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=h_qmyNvXZ8M:w2kXVjou_i4:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=h_qmyNvXZ8M:w2kXVjou_i4:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/h_qmyNvXZ8M" height="1" width="1" alt=""/> Life Beyond Distributed Transactions: An Apostate's Implementation - Document Example https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/ Jimmy Bogard urn:uuid:d1ccd01b-f000-9339-f675-b0d934468ef8 Thu, 09 Aug 2018 15:59:41 +0000 <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> </ul> <p>In the last post, I walked through the "happy path" scenario of coordinated communication/activities between multiple resources that otherwise can't participate in a transaction. In this post, I'll walk through a code example of building out document coordination in</p> <p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/life-beyond-transactions-implementation-primer/">A Primer</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-aggregate-coordination/">Document Coordination</a></li> <li><a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-document-example/">Document Example</a></li> </ul> <p>In the last post, I walked through the "happy path" scenario of coordinated communication/activities between multiple resources that otherwise can't participate in a transaction. In this post, I'll walk through a code example of building out document coordination in <a href="https://azure.microsoft.com/en-us/services/cosmos-db/">Azure Cosmos DB</a>. My starting point is this set of code for approving an invoice and updating stock:</p> <pre><code class="language-c#">[HttpPost] public async Task&lt;IActionResult&gt; Approve(Guid id) { var orderRequest = await _orderRepository.GetItemAsync(id); orderRequest.Approve(); await _orderRepository.UpdateItemAsync(orderRequest); foreach (var lineItem in orderRequest.Items) { var stock = (await _stockRepository .GetItemsAsync(s =&gt; s.ProductId == lineItem.ProductId)) .FirstOrDefault(); stock.QuantityAvailable -= lineItem.Quantity; await _stockRepository.UpdateItemAsync(stock); } return RedirectToPage("/Orders/Show", new { id }); } </code></pre> <p>The repositories in my example are straight from the example code when you download a sample application in the Azure Portal, and just wrap the underlying <a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.client.documentclient?view=azure-dotnet">DocumentClient</a>.</p> <h3 id="modelingourdocument">Modeling our document</h3> <p>First, we need to baseline our document messages. These objects can be POCOs, but we still need some base information. Since we want to enforce idempotent actions, we need to be able to distinguish between different messages. 
The easiest way to do so is with a unique identifier per message:</p> <pre><code class="language-c#">public interface IDocumentMessage { Guid Id { get; } } </code></pre> <p>Since our documents need to store and process messages in an inbox/outbox, we need to build out our base Document class to include these items. We can also build a completely separate object for our inbox/outbox, but for simplicity sake, we'll just use a base class:</p> <pre><code class="language-c#">public abstract class DocumentBase { [JsonProperty(PropertyName = "id")] public Guid Id { get; set; } private HashSet&lt;IDocumentMessage&gt; _outbox = new HashSet&lt;IDocumentMessage&gt;(DocumentMessageEqualityComparer.Instance); private HashSet&lt;IDocumentMessage&gt; _inbox = new HashSet&lt;IDocumentMessage&gt;(DocumentMessageEqualityComparer.Instance); public IEnumerable&lt;IDocumentMessage&gt; Outbox { get =&gt; _outbox; protected set =&gt; _outbox = value == null ? new HashSet&lt;IDocumentMessage&gt;(DocumentMessageEqualityComparer.Instance) : new HashSet&lt;IDocumentMessage&gt;(value, DocumentMessageEqualityComparer.Instance); } public IEnumerable&lt;IDocumentMessage&gt; Inbox { get =&gt; _inbox; protected set =&gt; _inbox = value == null ? new HashSet&lt;IDocumentMessage&gt;(DocumentMessageEqualityComparer.Instance) : new HashSet&lt;IDocumentMessage&gt;(value, DocumentMessageEqualityComparer.Instance); } </code></pre> <p>Each of our mailboxes are a <a href="https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.hashset-1?view=netframework-4.7.2">HashSet</a>, to ensure we enforce uniqueness of document messages inside our document. We wrap our mailboxes in a couple of convenience properties for storage purposes (since our documents are serialized using JSON.NET, we have to model appropriately for its serialization needs).</p> <p>We're using a custom equality comparer for document messages based on that interface and ID we added earlier:</p> <pre><code class="language-c#">public class DocumentMessageEqualityComparer : IEqualityComparer&lt;IDocumentMessage&gt; { public static readonly DocumentMessageEqualityComparer Instance = new DocumentMessageEqualityComparer(); public bool Equals(IDocumentMessage x, IDocumentMessage y) { return x.Id == y.Id; } public int GetHashCode(IDocumentMessage obj) { return obj.Id.GetHashCode(); } } </code></pre> <p>With this, we can make sure that our document messages only exist in our inbox/outboxes once (assuming we can pick unique GUIDs).</p> <p>Next, we need to be able to send a message in our <code>DocumentBase</code> class:</p> <pre><code class="language-c#">protected void Send(IDocumentMessage documentMessage) { if (_outbox == null) _outbox = new HashSet&lt;IDocumentMessage&gt;(DocumentMessageEqualityComparer.Instance); _outbox.Add(documentMessage); } </code></pre> <p>We have to check that the outbox exists and create it if it's not (due to serialization, it might not exist), then simply add the document message to the outbox.</p> <p>To process a document message, we need to make sure this action is idempotent. To check for idempotency, we'll examine our <code>Inbox</code> before executing the action. 
We can wrap this all up in a single method that our derived documents will use:</p> <pre><code class="language-c#">protected void Process&lt;TDocumentMessage&gt;( TDocumentMessage documentMessage, Action&lt;TDocumentMessage&gt; action) where TDocumentMessage : IDocumentMessage { if (_inbox == null) _inbox = new HashSet&lt;IDocumentMessage&gt;(DocumentMessageEqualityComparer.Instance); if (_inbox.Contains(documentMessage)) return; action(documentMessage); _inbox.Add(documentMessage); } </code></pre> <p>Our derived documents will need to call this method to process their messages with the idempotency check. Once a message is processed successfully, we'll add it to the inbox. And since our transaction boundary is the document, if something fails, the action never happened and the message never gets stored to the inbox. Only by keeping our inbox, outbox, and business data inside a transaction boundary can we guarantee all either succeeds or fails.</p> <h3 id="refactoringouraction">Refactoring our action</h3> <p>Now that we have our basic mechanism of storing and processing messages, we can refactor our original action. It was split basically into two actions - one of approving the invoice, and another of updating stock.</p> <p>We need to "send" a message from our Order to Stock. But what should that message look like? A few options:</p> <ul> <li>Command, "update stock"</li> <li>Event, "order approved"</li> <li>Event, "item purchased"</li> </ul> <p>If I go with a command, I'm coupling the primary action with the intended side effect. But what if this side effect needs to change? Be removed? I don't want burden the main Order logic with that.</p> <p>What about the first event, "order approved"? I could go with this - but looking at the work done and communication, Stock doesn't care that an order was approved, it only really cares if an item was purchased. Approvals are really the internal business rules of an order, but the ultimate side effect is that items finally become "purchased" at this point in time. So if I used "order approved", I'd be coupling Stock to the internal business rules of Order. Even though it's an event, "order approved" concerns internal business processes that other documents/services shouldn't care about.</p> <p>Finally, we have "item purchased". This most closely matches what Stock cares about, and removes any kind of process coupling between these two aggregates. 
If I went with the macro event, "order approved", I'd still have to translate that to what it means for Stock.</p> <p>With this in mind, I'll create a document message representing this event:</p> <pre><code class="language-c#">public class ItemPurchased : IDocumentMessage { public int ProductId { get; set; } public int Quantity { get; set; } public Guid Id { get; set; } } </code></pre> <p>I know how much of which product was purchased, and that's enough for Stock to deal with the consequences of that event.</p> <p>My <code>Order</code> class then models its <code>Approve</code> method to include sending these new messages:</p> <pre><code class="language-c#">public void Approve() { Status = Status.Approved; foreach (var lineItem in Items) { Send(new ItemPurchased { ProductId = lineItem.ProductId, Quantity = lineItem.Quantity, Id = Guid.NewGuid() }); } } </code></pre> <p>I don't have an idempotency check here (if the order is already approved, do nothing), but you get the idea.</p> <p>On the Stock side, I need to add a method to process the <code>ItemPurchased</code> message:</p> <pre><code class="language-c#">public void Handle(ItemPurchased message) { Process(message, e =&gt; { QuantityAvailable -= e.Quantity; }); } </code></pre> <p>Finally, we need some way of linking our <code>ItemPurchased</code> message with the <code>Stock</code>, and that's the intent of an <code>IDocumentMessageHandler</code>:</p> <pre><code class="language-c#">public interface IDocumentMessageHandler&lt;in T&gt; where T : IDocumentMessage { Task Handle(T message); } </code></pre> <p>The part of our action that loaded up each <code>Stock</code> is the code we'll put into our handler:</p> <pre><code class="language-c#">public class UpdateStockFromItemPurchasedHandler : IDocumentMessageHandler&lt;ItemPurchased&gt; { private readonly IDocumentDBRepository&lt;Stock&gt; _repository; public UpdateStockFromItemPurchasedHandler( IDocumentDBRepository&lt;Stock&gt; repository) =&gt; _repository = repository; public async Task Handle(ItemPurchased message) { var stock = (await _repository .GetItemsAsync(s =&gt; s.ProductId == message.ProductId)) .Single(); stock.Handle(message); await _repository.UpdateItemAsync(stock); } } </code></pre> <p>Not that exciting, as our document will handle the real business logic of handling the request. This class just connects the dots between an <code>IDocumentMessageHandler</code> and some <code>DocumentBase</code> instance.</p> <p>With these basic building blocks, we'll modify our action to only update the <code>Order</code> instance:</p> <pre><code class="language-c#">[HttpPost] public async Task&lt;IActionResult&gt; Approve(Guid id) { var orderRequest = await _orderRepository.GetItemAsync(id); orderRequest.Approve(); await _orderRepository.UpdateItemAsync(orderRequest); return RedirectToPage("/Orders/Show", new { id }); } </code></pre> <p>Now when we approve our order, we only create messages in the outbox, which get persisted along with the order. 
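<p>As for the idempotency check I skipped in <code>Approve</code>, a sketch of what that guard might look like (my assumption, not code from the original sample):</p> <pre><code class="language-c#">public void Approve()
{
    // Guard: if the order is already approved, don't flip the status again
    // or send duplicate ItemPurchased messages.
    if (Status == Status.Approved)
        return;

    Status = Status.Approved;

    foreach (var lineItem in Items)
    {
        Send(new ItemPurchased
        {
            ProductId = lineItem.ProductId,
            Quantity = lineItem.Quantity,
            Id = Guid.NewGuid()
        });
    }
}
</code></pre>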
If I look at the saved order in Cosmos DB, I can verify the items are persisted:</p> <pre><code class="language-json">{ "Items": [ { "Quantity": 1, "ListPrice": 3399.99, "ProductId": 771, "ProductName": "Mountain-100 Silver, 38", "Subtotal": 3399.99 } ], "Status": 2, "Total": 3399.99, "Customer": { "FirstName": "Jane", "LastName": "Doe", "MiddleName": "Mary" }, "id": "8bf4bda2-3796-431e-9936-8511243352d2", "Outbox": { "$type": "System.Collections.Generic.HashSet`1[[AdventureWorksCosmos.UI.Infrastructure.IDocumentMessage, AdventureWorksCosmos.UI]], System.Core", "$values": [ { "$type": "AdventureWorksCosmos.UI.Models.Orders.ItemPurchased, AdventureWorksCosmos.UI", "ProductId": 771, "Quantity": 1, "Id": "987ce801-e7cf-4abf-aba7-83d7eed00610" } ] }, "Inbox": { "$type": "System.Collections.Generic.HashSet`1[[AdventureWorksCosmos.UI.Infrastructure.IDocumentMessage, AdventureWorksCosmos.UI]], System.Core", "$values": [] }, "_rid": "lJFnANVMlwADAAAAAAAAAA==", "_self": "dbs/lJFnAA==/colls/lJFnANVMlwA=/docs/lJFnANVMlwADAAAAAAAAAA==/", "_etag": "\"02002652-0000-0000-0000-5b48f2140000\"", "_attachments": "attachments/", "_ts": 1531507220 } </code></pre> <p>In order to get that polymorphic behavior for my <code>IDocumentMessage</code> collections, I needed to configure the JSON serializer settings in my repository:</p> <pre><code class="language-c#">_client = new DocumentClient(new Uri(Endpoint), Key, new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.Auto }); </code></pre> <p>With these pieces in place, I've removed the process coupling between updating an order's status and updating stock items using document messaging. Of course, we don't actually have anything <em>dispatching</em> our messages. We'll cover the infrastructure for dispatching messages in the next post.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=gm7Aru-c8xk:ypdwK1jbSVk:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=gm7Aru-c8xk:ypdwK1jbSVk:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=gm7Aru-c8xk:ypdwK1jbSVk:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=gm7Aru-c8xk:ypdwK1jbSVk:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=gm7Aru-c8xk:ypdwK1jbSVk:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/gm7Aru-c8xk" height="1" width="1" alt=""/> Collaboration vs. Critique https://lostechies.com/derekgreer/2018/05/18/collaboration-vs-critique/ Los Techies urn:uuid:8a2d0bfb-9efe-2fd2-1e9b-6ba6d06055da Fri, 18 May 2018 17:00:00 +0000 While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development occurs by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day and most of problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. 
Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique. <p>While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development occurs by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day and most of problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique.</p> <p>To help illustrate what I’ve seen happen countless times both in catch-up design sessions and code reviews, consider the following two scenarios:</p> <h3 id="scenario-1">Scenario 1</h3> <p>Tom and Sally are both developers on a team maintaining a large-scale application. Tom takes the next task in the development queue which happens to have some complex processes that will need to be addressed. Being the good development team that they are, both Tom and Sally are aware of the requirements of the application (i.e. how the app needs to work from the user’s perspective), but they have deferred design-level discussions until the time of implementation. After Tom gets into the process a little, seeing that the problem is non-trivial, he pings Sally to help him brainstorm different approaches to solving the problem. Tom and Sally have been working together for over a year and have become accustomed to these sort of ad-hoc design sessions. As they begin discussing the problem, they each start tossing ideas out on the proverbial table resulting in multiple approaches to compare and contrast. The nature of the discussion is such that neither Tom nor Sally are embarrassed or offended when the other points out flaws in the design because there’s a sense of safety in their mutual understanding that this is a brainstorming session and that neither have thought in depth about the solutions being set froth yet. 
Tom throws out a couple of ideas, but ends up shooting them down himself as he uses Sally as a sounding board for the ideas. Sally does the same, but toward the end of the conversation suggests a slight alteration to one of Tom’s initial suggestions that they think may make it work after all. They end the session both with a sense that they’ve worked together to arrive at the best solution.</p> <h3 id="scenario-2">Scenario 2</h3> <p>Bill and Jake are developers on another team. They tend to work in a more siloed fashion, but they do rely upon one another for help from time to time and they are required to do code reviews prior to their code being merged into the main branch of development. Bill takes the next task in the development queue and spends the better part of an afternoon working out a solution with a basic working skeleton of the direction he’s going. The next day he decides that it might be good to have Jake take a look at the design to make him aware of the direction. Seeing where Bill’s design misses a few opportunities to make the implementation more adaptable to changes in the future, Jake points out where he would have done things differently. Bill acknowledges that Jake’s suggestions would be better and would have probably been just as easy to implement from the beginning, but inwardly he’s a bit disappointed that Jake didn’t like his design as-is and that he has to do some rework. In the end, Bill is left with a feeling of critique rather than collaboration.</p> <p>Whether it’s a high-level UML diagram or working code, how one person tends to perceive feedback on the ideas comprising a potential solution has everything to do with timing. It can be the exact same feedback they would have received either way, but when the feedback occurs often makes a difference between whether it’s perceived as collaboration or critique. It’s all about when the conversation happens.</p> Testing Button Click in React with Jest https://derikwhittaker.blog/2018/05/07/testing-button-click-in-react-with-jest/ Maintainer of Code, pusher of bits… urn:uuid:a8e7d9fd-d718-a072-55aa-0736ac21bec4 Mon, 07 May 2018 17:01:59 +0000 When building React applications you will most likely find yourself using Jest as your testing framework.  Jest has some really, really cool features built in.  But when you use Enzyme you can take your testing to the nest level. One really cool feature is the ability to test click events via Enzyme to ensure your &#8230; <p><a href="https://derikwhittaker.blog/2018/05/07/testing-button-click-in-react-with-jest/" class="more-link">Continue reading <span class="screen-reader-text">Testing Button Click in React with&#160;Jest</span></a></p> <p>When building <a href="https://reactjs.org/" target="_blank" rel="noopener">React</a> applications you will most likely find yourself using <a href="https://facebook.github.io/jest" target="_blank" rel="noopener">Jest</a> as your testing framework.  Jest has some really, really cool features built in.  
But when you use <a href="http://airbnb.io/enzyme/docs/guides/jest.html" target="_blank" rel="noopener">Enzyme</a> you can take your testing to the nest level.</p> <p>One really cool feature is the ability to test click events via Enzyme to ensure your code responds as expected.</p> <p>Before we get started you are going to want to make sure you have Jest and Enzyme installed in your application.</p> <ul> <li>Installing <a href="https://github.com/airbnb/enzyme/blob/master/docs/installation/README.md" target="_blank" rel="noopener">Enzyme</a></li> <li>Installing <a href="https://facebook.github.io/jest/docs/en/getting-started.html" target="_blank" rel="noopener">Jest</a></li> </ul> <p>Sample code under test</p> <p><img data-attachment-id="111" data-permalink="https://derikwhittaker.blog/2018/05/07/testing-button-click-in-react-with-jest/screen-shot-2018-05-07-at-12-52-56-pm/" data-orig-file="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640" data-orig-size="580,80" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Screen Shot 2018-05-07 at 12.52.56 PM" data-image-description="" data-medium-file="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640?w=300" data-large-file="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640?w=580" class="alignnone size-full wp-image-111" src="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640" alt="Screen Shot 2018-05-07 at 12.52.56 PM" srcset="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png 580w, https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=150 150w, https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=300 300w" sizes="(max-width: 580px) 100vw, 580px" /></p> <p>What I would like to be able to do is pull the button out of my component and test the <code>onClick</code> event handler.</p> <div class="code-snippet"> <pre class="code-content"> // Make sure you have your imports setup correctly import React from 'react'; import { shallow } from 'enzyme'; it('When active link clicked, will push correct filter message', () =&gt; { let passedFilterType = ''; const handleOnTotalsFilter = (filterType) =&gt; { passedFilterType = filterType; }; const accounts = {}; const wrapper = shallow(&lt;MyComponent accounts={accounts} filterHeader="" onTotalsFilter={handleOnTotalsFilter} /&gt;); const button = wrapper.find('#archived-button'); button.simulate('click'); expect(passedFilterType).toBe(TotalsFilterType.archived); }); </pre> </div> <p>Lets take a look at the test above</p> <ol> <li>First we are going to create a callback (click handler) to catch the bubbled up values.</li> <li>We use Enzyme to create our component <code>MyComponent</code></li> <li>We use the .find() on our wrapped component to find our &lt;Button /&gt; by id</li> <li>After we get our button we can call .simulate(&#8216;click&#8217;) which will act as a user clicking the button.</li> 
<li>We can assert that the expected value bubbles up.</li> </ol> <p>As you can see, simulating a click event of a rendered component is very straightforward, yet very powerful.</p> <p>Till next time,</p> Lessons from a year of Golang https://lostechies.com/ryansvihla/2018/05/07/lessons-from-a-year-of-go/ Los Techies urn:uuid:e37d6484-2864-cc2a-034c-cac3d89dede7 Mon, 07 May 2018 13:16:00 +0000 I’m hoping to share, in a non-negative way, some lessons that will help others avoid the pitfalls I ran into in my most recent work building infrastructure software on top of Kubernetes using Go. It sounded like an awesome job at first, but I ran into a lot of problems getting productive. <p>I’m hoping to share, in a non-negative way, some lessons that will help others avoid the pitfalls I ran into in my most recent work building infrastructure software on top of Kubernetes using Go. It sounded like an awesome job at first, but I ran into a lot of problems getting productive.</p> <p>This isn’t meant to evaluate whether you should pick up Go or tell you what you should think of it; it’s strictly meant to help people who are new to the language but experienced in Java, Python, Ruby, C#, etc. and have read some basic Go getting-started guide.</p> <h2 id="dependency-management">Dependency management</h2> <p>This is probably the feature most frequently talked about by newcomers to Go, and with some justification, as dependency management has been a rapidly shifting area that’s nothing like what experienced Java, C#, Ruby or Python developers are used to.</p> <p>I’ll cut to the chase: the default tool now is <a href="https://github.com/golang/dep">Dep</a>. All other tools I’ve used, such as <a href="https://github.com/Masterminds/glide">Glide</a> or <a href="https://github.com/tools/godep">Godep</a>, are now deprecated in favor of Dep, and while Dep has advanced rapidly there are some problems you’ll eventually run into (or I did):</p> <ol> <li>Dep hangs randomly and is slow, which is supposedly due to network traffic <a href="https://github.com/golang/dep/blob/c8be449181dadcb01c9118a7c7b592693c82776f/docs/failure-modes.md#hangs">but it happens to everyone I know with tons of bandwidth</a>. Regardless, I’d like an option to supply a timeout and report an error.</li> <li>Versions and transitive dependency conflicts can still be a real breaking issue in Go. Without shading or its equivalent, two packages depending on different versions of a given package can break your build; there are a number of proposals to fix this, but we’re not there yet.</li> <li>Dep has some goofy ways it resolves transitive dependencies and you may have to add explicit references to them in your Gopkg.toml file. You can see an example <a href="https://kubernetes.io/blog/2018/01/introducing-client-go-version-6/">here</a> under <strong>Updating dependencies – golang/dep</strong>.</li> </ol> <h3 id="my-advice">My advice</h3> <ul> <li>Avoid hangs by checking your dependencies directly into your source repository and just using the dependency tool (dep, godep, glide - it doesn’t matter) for downloading dependencies.</li> <li>Minimize transitive dependencies by keeping stuff small and using patterns like microservices when your dependency tree conflicts.</li> </ul> <h2 id="gopath">GOPATH</h2> <p>Something that takes some adjustment is that you check out all your source code in one directory under one path (by default ~/go/src), and the import path mirrors where the source tree was checked out. 
Example:</p> <ol> <li>I want to use a package I found on github called jim/awesomeness</li> <li>I have to go to ~/go/src and mkdir -p github.com/jim</li> <li>cd into that and clone the package.</li> <li>When I reference the package in my source file it’ll be literally importing github.com/jim/awesomeness</li> </ol> <p>A better guide to GOPATH and packages is <a href="https://thenewstack.io/understanding-golang-packages/">here</a>.</p> <h3 id="my-advice-1">My advice</h3> <p>Don’t fight it, it’s actually not so bad once you embrace it.</p> <h2 id="code-structure">Code structure</h2> <p>This is a hot topic and there are a few standards for the right way to structure your code, from projects that do “file per class” to giant files with general concept names (think like types.go and net.go). Also, if you’re used to using a lot of sub-packages you’re going to have issues with not being able to compile if, for example, you have two sub-packages that reference one another.</p> <h3 id="my-advice-2">My Advice</h3> <p>In the end I was reasonably ok with something like the following:</p> <ul> <li>myproject/bin for generated executables</li> <li>myproject/cmd for command line code</li> <li>myproject/pkg for code related to the package</li> </ul> <p>Now whatever you do is fine, this was just a common idiom I saw, but it wasn’t remotely all projects. I also had some luck with just jamming everything into the top level of the package and keeping packages small (and making new packages for common code that is used in several places in the code base). If I ever return to using Go for any reason I will probably just jam everything into the top level directory.</p> <h2 id="debugging">Debugging</h2> <p>No debugger! There are some projects attempting to add one but Rob Pike finds them a crutch.</p> <h3 id="my-advice-3">My Advice</h3> <p>Lots of unit tests and print statements.</p> <h2 id="no-generics">No generics</h2> <p>Sorta self-explanatory, and it causes you a lot of pain when you’re used to reaching for these.</p> <h3 id="my-advice-4">My advice</h3> <p>Look at the code generation support which uses pragmas; this is not exactly the same as having generics, but if you have some code that has a lot of boilerplate without them this is a valid alternative. See this official <a href="https://blog.golang.org/generate">Go Blog post</a> for more details.</p> <p>If you don’t want to use generation you really only have reflection left as a valid tool, which comes with all of its lack of speed and type safety.</p> <h2 id="cross-compiling">Cross compiling</h2> <p>If you have certain features or dependencies you may find you cannot take advantage of one of Go’s better features: cross compilation.</p> <p>I ran into this when using the confluent-go/kafka library which depends on the C librdkafka library. It basically meant I had to do all my development in a Linux VM because almost all our packages relied on this.</p> <h3 id="my-advice-5">My Advice</h3> <p>Avoid C dependencies at all costs.</p> <h2 id="error-handling">Error handling</h2> <p>Go error handling is not exception-based but return-based, and it’s got a lot of common idioms around it:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>myValue, err := doThing()
if err != nil {
  return -1, fmt.Errorf("unable to doThing %v", err)
}
</code></pre></div></div> <p>Needless to say this can get very wordy when dealing with deeply nested calls or when you’re interacting a lot with external systems.
It is definitely a mind shift if you’re used to throwing exceptions wherever and having one single place to catch all exceptions where they’re handled appropriately.</p> <h3 id="my-advice-6">My Advice</h3> <p>I’ll be honest, I never totally made my peace with this. I had good training from experienced open source contributors to major Go projects, read all the right blog posts, and definitely felt like I’d heard enough from the community on why the current state of Go error handling was great in their opinions, but the lack of stack traces was a deal breaker for me.</p> <p>On the positive side, I found Dave Cheney’s advice on error handling to be the most practical, and he wrote <a href="https://github.com/pkg/errors">a package</a> containing a lot of that advice. We found it invaluable as it provided those stack traces we all missed, but you had to remember to use it.</p> <h2 id="summary">Summary</h2> <p>A lot of people really love Go and are very productive with it; I just was never one of those people and that’s ok. However, I think the advice in this post is reasonably sound and uncontroversial. So, if you find yourself needing to write some code in Go, give this guide a quick perusal and you’ll waste a lot less time than I did getting productive in developing applications in Go.</p> Raspberry Pi Kubernetes Cluster - Part 2 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2/ Jason Meridth urn:uuid:0aef121f-48bd-476f-e09d-4ca0aa2ac602 Thu, 03 May 2018 02:13:07 +0000 <p>Howdy again.</p> <p>Alright, my 8 port switch showed up so I was able to connect my Raspberry Pi 3B+ boards to my home network. I plugged it in with 6 1ft CAT5 cables I had in my catch-all box that all of us nerds have. I’d highly suggest flexible CAT 6 cables instead if you can get them, like <a href="https://www.amazon.com/Cat-Ethernet-Cable-Black-Connectors/dp/B01IQWGKQ6">here</a>. I ordered them and they showed up before I finished this post, so I am using the CAT6 cables.</p> <!--more--> <p>The IP addresses they will receive initially from my home router via DHCP can be determined with nmap. Let’s imagine my subnet is 192.168.1.0/24.</p> <p>Once I got them on the network I did the following:</p> <script src="https://gist.github.com/64e7b08729ffe779f77a7bda0221c6e9.js"> </script> <h3 id="install-raspberrian-os-on-sd-cards">Install Raspbian OS On SD Cards</h3> <p>You can get the Raspbian Stretch Lite OS from <a href="https://www.raspberrypi.org/downloads/raspbian/">here</a>.</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/raspberry_pi_stretch_lite.png" alt="Raspberry Pi Stretch Lite" /></p> <p>Then use the <a href="https://etcher.io/">Etcher</a> tool to install it to each of the 6 SD cards.</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/etcher.gif" alt="Etcher" /></p> <h4 id="important">IMPORTANT</h4> <p>Before putting the cards into the Raspberry Pis you need to add an <code class="highlighter-rouge">ssh</code> folder to the root of the SD cards. This will allow you to ssh to each Raspberry Pi with default credentials (username: <code class="highlighter-rouge">pi</code> and password <code class="highlighter-rouge">raspberry</code>). Example: <code class="highlighter-rouge">ssh pi@raspberry_pi_ip</code> where <code class="highlighter-rouge">raspberry_pi_ip</code> is obtained from the nmap command above.</p> <p>Next post will be setting up Kubernetes.
Thank you for reading.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2/">Raspberry Pi Kubernetes Cluster - Part 2</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on May 02, 2018.</p> Multi-Environment Deployments with React https://derikwhittaker.blog/2018/04/10/multi-environment-deployments-with-react/ Maintainer of Code, pusher of bits… urn:uuid:4c0ae985-09ac-6d2e-0429-addea1632ea3 Tue, 10 Apr 2018 12:54:17 +0000 If you are using Create-React-App to scaffold your React application there is built in support for changing environment variables based on the NODE_ENV values; this is done by using .env files.  In short this process works by having a .env, .env.production, .env.development set of files.  When you run/build your application CRA will set the NODE_ENV value &#8230; <p><a href="https://derikwhittaker.blog/2018/04/10/multi-environment-deployments-with-react/" class="more-link">Continue reading <span class="screen-reader-text">Multi-Environment Deployments with&#160;React</span></a></p> <p>If you are using <a href="https://github.com/facebook/create-react-app" target="_blank" rel="noopener">Create-React-App</a> to scaffold your React application there is <a href="https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#adding-development-environment-variables-in-env" target="_blank" rel="noopener">built in support</a> for changing environment variables based on the NODE_ENV values; this is done by using .env files.  In short this process works by having a .env, .env.production, .env.development set of files.  When you run/build your application <a href="https://github.com/facebook/create-react-app" target="_blank" rel="noopener">CRA</a> will set the NODE_ENV value to either <code>development</code> or <code>production</code> and based on these values the correct .env file will be used.</p> <p>This works great when you have a simple deploy setup. But many times in enterprise level applications you need support for more than just 2 environments, many times it is 3-4 environments.  Common logic would suggest that you can accomplish this via the built in mechanism by having additional .env files and changing the NODE_ENV value to the value you care about.  However, CRA does not support this without doing an <code>eject</code>, which will eject all the default conventions and leave it to you to configure your React application.  Maybe this is a good idea, but in my case ejecting was not something I wanted to do.</p> <p>Because I did not want to do an <code>eject</code> I needed to find another solution, and after a fair amount of searching I found a solution that seems to work for me and my needs and is about the amount of effort I wanted <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" alt="🙂" /></p> Raspberry Pi Kubernetes Cluster - Part 1 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1/ Jason Meridth urn:uuid:bd3470f6-97d5-5028-cf12-0751f90915c3 Sat, 07 Apr 2018 14:01:00 +0000 <p>Howdy</p> <p>This is going to be the first post about my setup of a Raspberry Pi Kubernetes Cluster. I saw a post by <a href="https://harthoover.com/kubernetes-1.9-on-a-raspberry-pi-cluster/">Hart Hoover</a> and it finally motivated me to purchase his “grocery list” and do this finally.
I’ve been using <a href="https://kubernetes.io/docs/getting-started-guides/minikube/">Minikube</a> for local Kubernetes testing but it doesn’t give you multi-host testing abilities. I’ve also been wanting to get deeper into my Raspberry Pi knowledge. Lots of learning and winning.</p> <p>The items I bought were:</p> <ul> <li>Six <a href="https://smile.amazon.com/dp/B07BFH96M3">Raspberry Pi 3 Model B+ Motherboards</a></li> <li>Six <a href="https://smile.amazon.com/gp/product/B010Q57T02/">SanDisk Ultra 32GB microSDHC UHS-I Card with Adapter, Grey/Red, Standard Packaging (SDSQUNC-032G-GN6MA)</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B011KLFERG/ref=oh_aui_detailpage_o02_s01?ie=UTF8&amp;psc=1">Sabrent 6-Pack 22AWG Premium 3ft Micro USB Cables High Speed USB 2.0 A Male to Micro B Sync and Charge Cables Black CB-UM63</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B01L0KN8OS/ref=oh_aui_detailpage_o02_s01?ie=UTF8&amp;psc=1">AmazonBasics 6-Port USB Wall Charger (60-Watt) - Black</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B01D9130QC/ref=oh_aui_detailpage_o02_s00?ie=UTF8&amp;psc=1">GeauxRobot Raspberry Pi 3 Model B 6-layer Dog Bone Stack Clear Case Box Enclosure also for Pi 2B B+ A+ B A</a></li> <li>One <a href="http://amzn.to/2gNzLzi">Black Box 8-Port Switch</a></li> </ul> <p>Here is the tweet when it all arrived:</p> <div class="jekyll-twitter-plugin"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">I blame <a href="https://twitter.com/hhoover?ref_src=twsrc%5Etfw">@hhoover</a> ;). I will be building my <a href="https://twitter.com/kubernetesio?ref_src=twsrc%5Etfw">@kubernetesio</a> cluster once the 6pi case shows up next Wednesday. The extra pi is to upgrade my <a href="https://twitter.com/RetroPieProject?ref_src=twsrc%5Etfw">@RetroPieProject</a>. Touch screen is an addition I want to try. Side project here I come. 
<a href="https://t.co/EebIKbsCeH">pic.twitter.com/EebIKbsCeH</a></p>&mdash; Jason Meridth (@jmeridth) <a href="https://twitter.com/jmeridth/status/980075584725422080?ref_src=twsrc%5Etfw">March 31, 2018</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> </div> <p>I spent this morning finally putting it together.</p> <p>Here is me getting started on the “dogbone case” to hold all of the Raspberry Pis:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_2.jpg" alt="The layout" /></p> <p>The bottom and one layer above:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_3.jpg" alt="The bottom and one layer above" /></p> <p>And the rest:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_4.jpg" alt="3 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_11.jpg" alt="4 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_12.jpg" alt="5 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_13.jpg" alt="6 Layers and Finished" /></p> <p>Different angles completed:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_14.jpg" alt="Finished Angle 2" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_15.jpg" alt="Finished Angle 3" /></p> <p>And connect the power:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_16.jpg" alt="Power" /></p> <p>Next post will be on getting the 6 sandisk cards ready and putting them in and watching the Raspberry Pis boot up and get a green light. Stay tuned.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1/">Raspberry Pi Kubernetes Cluster - Part 1</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on April 07, 2018.</p> Building AWS Infrastructure with Terraform: S3 Bucket Creation https://derikwhittaker.blog/2018/04/06/building-aws-infrastructure-with-terraform-s3-bucket-creation/ Maintainer of Code, pusher of bits… urn:uuid:cb649524-d882-220f-c253-406a54762705 Fri, 06 Apr 2018 14:28:49 +0000 If you are going to be working with any cloud provider it is highly suggested that you script out the creation/maintenance of your infrastructure.  In the AWS word you can use the native CloudFormation solution, but honestly I find this painful and the docs very lacking.  Personally, I prefer Terraform by Hashicorp.  In my experience &#8230; <p><a href="https://derikwhittaker.blog/2018/04/06/building-aws-infrastructure-with-terraform-s3-bucket-creation/" class="more-link">Continue reading <span class="screen-reader-text">Building AWS Infrastructure with Terraform: S3 Bucket&#160;Creation</span></a></p> <p>If you are going to be working with any cloud provider it is highly suggested that you script out the creation/maintenance of your infrastructure.  In the AWS word you can use the native <a href="https://www.googleadservices.com/pagead/aclk?sa=L&amp;ai=DChcSEwjD-Lry6KXaAhUMuMAKHTB8AYwYABAAGgJpbQ&amp;ohost=www.google.com&amp;cid=CAESQeD2aF3IUBPQj5YF9K0xmz0FNtIhnq3PzYAHFV6dMZVIirR_psuXDSgkzxZ0jXoyWfpECufNNfbp7JzHQ73TTrQH&amp;sig=AOD64_1b_L781SLpKXqLTFFYIk5Zv3BcHA&amp;q=&amp;ved=0ahUKEwi1l7Hy6KXaAhWD24MKHQXSCQ0Q0QwIJw&amp;adurl=" target="_blank" rel="noopener">CloudFormation</a> solution, but honestly I find this painful and the docs very lacking.  
Personally, I prefer <a href="https://www.terraform.io/" target="_blank" rel="noopener">Terraform</a> by <a href="https://www.hashicorp.com/" target="_blank" rel="noopener">Hashicorp</a>.  In my experience the simplicity and easy of use, not to mention the stellar documentation make this the product of choice.</p> <p>This is the initial post in what I hope to be a series of post about how to use Terraform to setup/build AWS Infrastructure.</p> <p>Terrform Documentation on S3 Creation -&gt; <a href="https://www.terraform.io/docs/providers/aws/d/s3_bucket.html" target="_blank" rel="noopener">Here</a></p> <p>In this post I will cover 2 things</p> <ol> <li>Basic bucket setup</li> <li>Bucket setup as Static website</li> </ol> <p>Setting up a basic bucket we can use the following</p> <div class="code-snippet"> <pre class="code-content">resource "aws_s3_bucket" "my-bucket" { bucket = "my-bucket" acl = "private" tags { Any_Tag_Name = "Tag value for tracking" } } </pre> </div> <p>When looking at the example above the only 2 values that are required are bucket and acl.</p> <p>I have added the use of Tags to show you can add custom tags to your bucket</p> <p>Another way to setup an S3 bucket is to act as a Static Web Host.   Setting this up takes a bit more configuration, but not a ton.</p> <div class="code-snippet"> <pre class="code-content">resource "aws_s3_bucket" "my-website-bucket" { bucket = "my-website-bucket" acl = "public-read" website { index_document = "index.html" error_document = "index.html" } policy = &lt;&lt;POLICY { "Version": "2012-10-17", "Statement": [ { "Sid": "AddPerm", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-website-bucket/*" } ] } POLICY tags { Any_Tag_Name = "Tag value for tracking" } } </pre> </div> <p>The example above has 2 things that need to be pointed out.</p> <ol> <li>The website settings.  Make sure you setup the correct pages here for index/error</li> </ol> <p>The Policy settings.  Here I am using just basic policy.  You can of course setup any policy here you want/need.</p> <p>As you can see, setting up S3 buckets is very simple and straight forward.</p> <p><strong><em>*** Reminder: S3 bucket names MUST be globally unique ***</em></strong></p> <p>Till next time,</p> SSH - Too Many Authentication Failures https://blog.jasonmeridth.com/posts/ssh-too-many-authentication-failures/ Jason Meridth urn:uuid:d7fc1034-1798-d75e-1d61-84fac635dda4 Wed, 28 Mar 2018 05:00:00 +0000 <h1 id="problem">Problem</h1> <p>I started seeing this error recently and had brain farted on why.</p> <figure class="highlight"><pre><code class="language-bash" data-lang="bash">Received disconnect from 123.123.132.132: Too many authentication failures <span class="k">for </span>hostname</code></pre></figure> <p>After a bit of googling it came back to me. This is because I’ve loaded too many keys into my ssh-agent locally (<code class="highlighter-rouge">ssh-add</code>). Why did you do that? Well, because it is easier than specifying the <code class="highlighter-rouge">IdentityFile</code> on the cli when trying to connect. But there is a threshhold. This is set by the ssh host by the <code class="highlighter-rouge">MaxAuthTries</code> setting in <code class="highlighter-rouge">/etc/ssh/sshd_config</code>. 
The default is 6.</p> <h1 id="solution-1">Solution 1</h1> <p>Clean up the keys in your ssh-agent.</p> <p><code class="highlighter-rouge">ssh-add -l</code> lists all the keys you have in your ssh-agent <code class="highlighter-rouge">ssh-add -d key</code> deletes the key from your ssh-agent</p> <h1 id="solution-2">Solution 2</h1> <p>You can solve this on the command line like this:</p> <p><code class="highlighter-rouge">ssh -o IdentitiesOnly=yes -i ~/.ssh/example_rsa foo.example.com</code></p> <p>What is IdentitiesOnly? Explained in Solution 3 below.</p> <h1 id="solution-3-best">Solution 3 (best)</h1> <p>Specifiy, explicitly, which key goes to which host(s) in your <code class="highlighter-rouge">.ssh/config</code> file.</p> <p>You need to configure which key (“IdentityFile”) goes with which domain (or host). You also want to handle the case when the specified key doesn’t work, which would usually be because the public key isn’t in ~/.ssh/authorized_keys on the server. The default is for SSH to then try any other keys it has access to, which takes us back to too many attempts. Setting “IdentitiesOnly” to “yes” tells SSH to only try the specified key and, if that fails, fall through to password authentication (presuming the server allows it).</p> <p>Your ~/.ssh/config would look like:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host *.myhost.com IdentitiesOnly yes IdentityFile ~/.ssh/myhost Host secure.myhost.com IdentitiesOnly yes IdentityFile ~/.ssh/mysecurehost_rsa Host *.myotherhost.domain IdentitiesOnly yes IdentityFile ~/.ssh/myotherhost_rsa </code></pre></div></div> <p><code class="highlighter-rouge">Host</code> is the host the key can connect to <code class="highlighter-rouge">IdentitiesOnly</code> means only to try <em>this</em> specific key to connect, no others <code class="highlighter-rouge">IdentityFile</code> is the path to the key</p> <p>You can try multiple keys if needed</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host *.myhost.com IdentitiesOnly yes IdentityFile ~/.ssh/myhost_rsa IdentityFile ~/.ssh/myhost_dsa </code></pre></div></div> <p>Hope this helps someone else.</p> <p>Cheers!</p> <p><a href="https://blog.jasonmeridth.com/posts/ssh-too-many-authentication-failures/">SSH - Too Many Authentication Failures</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 28, 2018.</p> Clear DNS Cache In Chrome https://blog.jasonmeridth.com/posts/clear-dns-cache-in-chrome/ Jason Meridth urn:uuid:6a2c8c0b-c91b-5f7d-dbc7-8065f0a2f1fd Tue, 27 Mar 2018 20:42:00 +0000 <p>I’m blogging this because I keep forgetting how to do it. Yeah, yeah, I can google it. 
I run this blog so I know it is always available…..anywho.</p> <p>Go To:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>chrome://net-internals/#dns </code></pre></div></div> <p>Click “Clear host cache” button</p> <p><img src="https://blog.jasonmeridth.com/images/clear_dns_cache_in_chrome.png" alt="clear_dns_cache_in_chrome" /></p> <p>Hope this helps someone else.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/clear-dns-cache-in-chrome/">Clear DNS Cache In Chrome</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 27, 2018.</p> Create Docker Container from Errored Container https://blog.jasonmeridth.com/posts/create-docker-container-from-errored-container/ Jason Meridth urn:uuid:33d5a6b5-4c48-ae06-deb6-a505edc6b427 Mon, 26 Mar 2018 03:31:00 +0000 <p>When I’m trying to “dockerize” an applciation I usually have to work through some wonkiness.</p> <p>To diagnose a container that has errored out, I, obviously, look at the logs via <code class="highlighter-rouge">docker logs -f [container_name]</code>. That is sometimes helpful. It will, at minimum tell me where I need to focus on the new container I’m going to create.</p> <p><img src="https://blog.jasonmeridth.com/images/diagnose.jpg" alt="diagnose" /></p> <p>Pre-requisites to being able to build a diagnosis container:</p> <ul> <li>You need to use <code class="highlighter-rouge">CMD</code>, <em>not</em> <code class="highlighter-rouge">ENTRYPOINT</code> in the Dockerfile <ul> <li>with <code class="highlighter-rouge">CMD</code> you’ll be able to start a shell, with <code class="highlighter-rouge">ENTRYPOINT</code> your diagnosis container will just keep trying to run that</li> </ul> </li> </ul> <p>To create a diagnosis container, do the following:</p> <ul> <li>Check your failed container ID by <code class="highlighter-rouge">docker ps -a</code></li> <li>Create docker image form the container with <code class="highlighter-rouge">docker commit</code> <ul> <li>example: <code class="highlighter-rouge">docker commit -m "diagnosis" [failed container id]</code></li> </ul> </li> <li>Check the newly create docker image ID by <code class="highlighter-rouge">docker images</code></li> <li><code class="highlighter-rouge">docker run -it [new container image id] sh</code> <ul> <li>this takes you into a container immediately after the error occurred.</li> </ul> </li> </ul> <p>Hope this helps someone else.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/create-docker-container-from-errored-container/">Create Docker Container from Errored Container</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 25, 2018.</p> Log Early, Log Often… Saved my butt today https://derikwhittaker.blog/2018/03/21/log-early-log-often-saved-my-butt-today/ Maintainer of Code, pusher of bits… urn:uuid:395d9800-e7ce-27fd-3fc1-5e68628bc161 Wed, 21 Mar 2018 13:16:03 +0000 In a prior posting (AWS Lambda:Log Early Log often, Log EVERYTHING) I wrote about the virtues and value about having really in depth logging, especially when working with cloud services.  Well today this logging saved my ASS a ton of detective work. 
Little Background I have a background job (Lambda that is called on a schedule) &#8230; <p><a href="https://derikwhittaker.blog/2018/03/21/log-early-log-often-saved-my-butt-today/" class="more-link">Continue reading <span class="screen-reader-text">Log Early, Log Often&#8230; Saved my butt&#160;today</span></a></p> <p>In a prior <a href="https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/" target="_blank" rel="noopener">posting (AWS Lambda: Log Early, Log often, Log EVERYTHING)</a> I wrote about the virtues and value of having really in-depth logging, especially when working with cloud services.  Well, today this logging saved my ASS a ton of detective work.</p> <p><strong>Little Background</strong><br /> I have a background job (Lambda that is called on a schedule) to create/update a data cache in a <a href="https://aws.amazon.com/dynamodb/" target="_blank" rel="noopener">DynamoDB</a> table.  Basically this job will pull data from one data source and attempt to push it as create/update/delete to our Dynamo table.</p> <p>Today when I was running our application I noticed things were not loading right; in fact I had JavaScript errors because of null reference errors.  I knew that the issue had to be in our data, but was not sure what was wrong.  If I had not had a ton of logging (debug and info) I would have had to run our code locally and step through/debug code for hundreds of items of data.</p> <p>However, because of in-depth logging I was able to quickly go to <a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener">CloudWatch</a>, filter on a few key words, and narrow hundreds/thousands of log entries down to 5.  Once I had these 5 entries I was able to expand a few of those entries and found the error within seconds.</p> <p>Total time to find the error was less than 5 minutes and I never opened a code editor or stepped into code.</p> <p>The moral of this story: because I log everything, including data (no PII of course), I was able to quickly find the source of the error.  Now to fix the code&#8230;.</p> <p>Till next time,</p> AWS Lambda: Log early, Log often, Log EVERYTHING https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/ Maintainer of Code, pusher of bits… urn:uuid:6ee7f59b-7f4c-1312-bfff-3f9c46ec8701 Tue, 06 Mar 2018 14:00:58 +0000 In the world of building client/server applications logs are important.  They are helpful when trying to see what is going on in your application.  I have always held the belief that your logs need to be detailed enough to allow you to determine the WHAT and WHERE without even looking at the code. But let's &#8230; <p><a href="https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/" class="more-link">Continue reading <span class="screen-reader-text">AWS Lambda: Log early, Log often, Log&#160;EVERYTHING</span></a></p> <p>In the world of building client/server applications logs are important.  They are helpful when trying to see what is going on in your application.  I have always held the belief that your logs need to be detailed enough to allow you to determine the WHAT and WHERE without even looking at the code.</p> <p>But let's be honest, in most cases when building client/server applications logs are an afterthought.  Often this is because you can pretty easily (in most cases) debug your application and step through the code.</p> <p>When building <a href="https://aws.amazon.com/serverless/" target="_blank" rel="noopener">serverless</a> applications with technologies like <a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener">AWS Lambda</a> functions (holds true for Azure Functions as well), your logging game really needs to step up.</p> <p>The reason for this is that you cannot really debug your Lambda in the wild (you can to some degree locally with AWS SAM or the Serverless framework).  Because of this you need to produce detailed enough logs to allow you to easily determine the WHAT and WHERE.</p> <p>When I build my serverless functions I have a few guidelines I follow:</p> <ol> <li>Info Log calls to methods, output argument data (make sure no <a href="https://en.wikipedia.org/wiki/Personally_identifiable_information" target="_blank" rel="noopener">PII</a>/<a href="https://en.wikipedia.org/wiki/Protected_health_information" target="_blank" rel="noopener">PHI</a>)</li> <li>Error Log any failures (in try/catch or .catch for promises)</li> <li>Debug Log any critical decision points</li> <li>Info Log exit calls at top level methods</li> </ol> <p>I also like to set up a simple and consistent format for my logs.  The example I follow for my Lambda logs is as seen below:</p> <div class="code-snippet"> <pre class="code-content">timestamp: [logLevel] : [Class.Method] - message {data points} </pre> </div>
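<p>To make that format concrete, below is a minimal sketch of a helper that emits log lines in roughly that shape. It is an illustration only; the helper name, fields, and example values are assumptions and not from the original post.</p> <div class="code-snippet"> <pre class="code-content">
// Hypothetical helper that writes lines in the format described above:
// timestamp: [logLevel] : [Class.Method] - message {data points}
const log = (logLevel, classMethod, message, data = {}) =&gt; {
  const line = `${new Date().toISOString()}: [${logLevel}] : [${classMethod}] - ${message} ${JSON.stringify(data)}`;
  // Inside a Lambda, anything written via console.log ends up in CloudWatch
  console.log(line);
};

// Example usage inside a handler method (names are made up)
log('INFO', 'AccountCache.rebuild', 'starting rebuild', { itemCount: 42 });
</pre> </div> <p>Keeping the helper this small makes it easy to apply the guidelines above consistently, and the trailing JSON blob keeps the data points easy to filter on in CloudWatch.</p>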
<p>I have found that if I follow these general guidelines the pain of determining failure points in serverless environments is heavily reduced.</p> <p>Till next time,</p> Sinon Error: Attempted to wrap undefined property ‘XYZ as function https://derikwhittaker.blog/2018/02/27/sinon-error-attempted-to-wrap-undefined-property-xyz-as-function/ Maintainer of Code, pusher of bits… urn:uuid:b41dbd54-3804-6f6d-23dc-d2a04635033a Tue, 27 Feb 2018 13:45:29 +0000 I ran into a fun little error recently when working on a ReactJs application.  In my application I was using SinonJs to set up some spies on a method; I wanted to capture the input arguments for verification.  However, when I ran my test I received the following error. Attempted to wrap undefined property handlOnAccountFilter as &#8230; <p><a href="https://derikwhittaker.blog/2018/02/27/sinon-error-attempted-to-wrap-undefined-property-xyz-as-function/" class="more-link">Continue reading <span class="screen-reader-text">Sinon Error: Attempted to wrap undefined property &#8216;XYZ as&#160;function</span></a></p> <p>I ran into a fun little error recently when working on a <a href="https://reactjs.org/" target="_blank" rel="noopener">ReactJs</a> application.  In my application I was using <a href="http://sinonjs.org/" target="_blank" rel="noopener">SinonJs</a> to set up some spies on a method; I wanted to capture the input arguments for verification.
However, when I ran my test I received the following error.</p> <blockquote><p>Attempted to wrap undefined property handlOnAccountFilter as function</p></blockquote> <p>My method under test is setup as such</p> <div class="code-snippet"> <pre class="code-content">handleOnAccountFilter = (filterModel) =&gt; { // logic here } </pre> </div> <p>I was using the above syntax is the <a href="https://github.com/jeffmo/es-class-public-fields" target="_blank" rel="noopener">proposed class property</a> feature, which will automatically bind the <code>this</code> context of the class to my method.</p> <p>My sinon spy is setup as such</p> <div class="code-snippet"> <pre class="code-content">let handleOnAccountFilterSpy = null; beforeEach(() =&gt; { handleOnAccountFilterSpy = sinon.spy(AccountsListingPage.prototype, 'handleOnAccountFilter'); }); afterEach(() =&gt; { handleOnAccountFilterSpy.restore(); }); </pre> </div> <p>Everything looked right, but I was still getting this error.  It turns out that this error is due in part in the way that the Class Property feature implements the handlOnAccountFilter.  When you use this feature the method/property is added to the class as an instance method/property, not as a prototype method/property.  This means that sinon is not able to gain access to it prior to creating an instance of the class.</p> <p>To solve my issue I had to make a change in the implementation to the following</p> <div class="code-snippet"> <pre class="code-content">handleOnAccountFilter(filterModel) { // logic here } </pre> </div> <p>After make the above change I needed to determine how I wanted to bind <code>this</code> to my method (Cory show 5 ways to do this <a href="https://medium.freecodecamp.org/react-binding-patterns-5-approaches-for-handling-this-92c651b5af56" target="_blank" rel="noopener">here</a>).  I chose to bind <code>this</code> inside the constructor as below</p> <div class="code-snippet"> <pre class="code-content">constructor(props){ super(props); this.handleOnAccountFilter = this.handleOnAccountFilter.bind(this); } </pre> </div> <p>I am not a huge fan of having to do this (pun intended), but oh well.  This solved my issues.</p> <p>Till next time</p> Ensuring componentDidMount is not called in Unit Tests https://derikwhittaker.blog/2018/02/22/ensuring-componentdidmount-is-not-called-in-unit-tests/ Maintainer of Code, pusher of bits… urn:uuid:da94c1a3-2de4-a90c-97f5-d7361397a33c Thu, 22 Feb 2018 19:45:53 +0000 If you are building a ReactJs you will often times implement componentDidMount on your components.  This is very handy at runtime, but can pose an issue for unit tests. If you are building tests for your React app you are very likely using enzyme to create instances of your component.  The issue is that when enzyme creates &#8230; <p><a href="https://derikwhittaker.blog/2018/02/22/ensuring-componentdidmount-is-not-called-in-unit-tests/" class="more-link">Continue reading <span class="screen-reader-text">Ensuring componentDidMount is not called in Unit&#160;Tests</span></a></p> <p>If you are building a <a href="https://reactjs.org/" target="_blank" rel="noopener">ReactJs</a> you will often times implement <code>componentDidMount</code> on your components.  This is very handy at runtime, but can pose an issue for unit tests.</p> <p>If you are building tests for your React app you are very likely using <a href="http://airbnb.io/projects/enzyme/" target="_blank" rel="noopener">enzyme</a> to create instances of your component.  
The issue is that when enzyme creates the component it invokes the lifecyle methods, like <code>componentDidMount</code>.  Sometimes we do not want this to be called, but how to suppress this?</p> <p>I have found 2 different ways to suppress/mock <code>componentDidMount</code>.</p> <p>Method one is to redefine <code>componentDidMount</code> on your component for your tests.  This could have interesting side effects so use with caution.</p> <div class="code-snippet"> <pre class="code-content"> describe('UsefullNameHere', () =&gt; { beforeAll(() =&gt; { YourComponent.prototype.componentDidMount = () =&gt; { // can omit or add custom logic }; }); }); </pre> </div> <p>Basically above I am just redefining the componentDidMount method on my component.  This works and allows you to have custom logic.  Be aware that when doing above you will have changed the implementation for your component for the lifetime of your test session.</p> <p>Another solution is to use a mocking framework like <a href="http://sinonjs.org/" target="_blank" rel="noopener">SinonJs</a>.  With Sinon you can stub out the <code>componentDidMount</code> implementation as seen below</p> <div class="code-snippet"> <pre class="code-content"> describe('UsefullNameHere', () =&gt; { let componentDidMountStub = null; beforeAll(() =&gt; { componentDidMountStub = sinon.stub(YourComponent.prototype, 'componentDidMount').callsFake(function() { // can omit or add custom logic }); }); afterAll(() =&gt; { componentDidMountStub.restore(); }); }); </pre> </div> <p>Above I am using .stub to redefine the method.  I also added .<a href="http://sinonjs.org/releases/v4.3.0/stubs/" target="_blank" rel="noopener">callsFake</a>() but this can be omitted if you just want to ignore the call.  You will want to make sure you restore your stub via the afterAll, otherwise you will have stubbed out the call for the lifetime of your test session.</p> <p>Till next time,</p> Los Techies Welcomes Derik Whittaker https://lostechies.com/derekgreer/2018/02/21/los-techies-welcomes-derik-whittaker/ Los Techies urn:uuid:adc9a1c8-48ea-3bea-1aa7-320d51db12a1 Wed, 21 Feb 2018 11:00:00 +0000 Los Techies would like to introduce, and extend a welcome to Derik Whittaker. Derik is a C# MVP, member of the AspInsiders group, community speaker, and Pluralsight author. Derik was previously a contributor at CodeBetter.com. Welcome, Derik! <p>Los Techies would like to introduce, and extend a welcome to Derik Whittaker. Derik is a C# MVP, member of the AspInsiders group, community speaker, and Pluralsight author. Derik was previously a contributor at <a href="http://codebetter.com/">CodeBetter.com</a>. Welcome, Derik!</p> Ditch the Repository Pattern Already https://lostechies.com/derekgreer/2018/02/20/ditch-the-repository-pattern-already/ Los Techies urn:uuid:7fab2063-d833-60ce-9e46-e4a413ec8391 Tue, 20 Feb 2018 21:00:00 +0000 One pattern that still seems particularly common among .Net developers is the Repository pattern. I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago. 
<p>One pattern that still seems particularly common among .Net developers is the <a href="https://martinfowler.com/eaaCatalog/repository.html">Repository pattern.</a> I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago.</p> <p>I had read several articles over the years advocating abandoning the Repository pattern in favor of other suggested approaches which served as a pebble in my shoe for a few years, but there were a few design principles whose application seemed to keep motivating me to use the pattern.  It wasn’t until a change of tooling and a shift in thinking about how these principles should be applied that I finally felt comfortable ditching the use of repositories, so I thought I’d recount my journey to provide some food for thought for those who still feel compelled to use the pattern.</p> <h2 id="mental-obstacle-1-testing-isolation">Mental Obstacle 1: Testing Isolation</h2> <p>What I remember being the biggest barrier to moving away from the use of repositories was writing tests for components which interacted with the database.  About a year or so before I actually abandoned use of the pattern, I remember trying to stub out a class derived from Entity Framework’s DbContext after reading an anti-repository blog post.  I don’t remember the details now, but I remember it being painful and even exploring use of a 3rd-party library designed to help write tests for components dependent upon Entity Framework.  I gave up after a while, concluding it just wasn’t worth the effort.  It wasn’t as if my previous approach was pain-free, as at that point I was accustomed to stubbing out particularly complex repository method calls, but as with many things we often don’t notice friction to which we’ve become accustomed for one reason or another.  I had assumed that doing all that work to stub out my repositories was what I should be doing.</p> <p>Another principle that I picked up from somewhere (maybe the big <a href="http://xunitpatterns.com/">xUnit Test Patterns</a> book? … I don’t remember) that seemed to keep me bound to my repositories was that <a href="http://aspiringcraftsman.com/2012/04/01/tdd-best-practices-dont-mock-others/">you shouldn’t write tests that depend upon dependencies you don’t own</a>.  I believed at the time that I should be writing tests for Application Layer services (which later morphed into discrete dispatched command handlers) and the idea of stubbing out either NHIbernate or Entity Framework violated my sensibilities.</p> <h2 id="mental-obstacle-2-the-dependency-inversion-principle-adherence">Mental Obstacle 2: The Dependency Inversion Principle Adherence</h2> <p>The Dependency Inversion Principle seems to be a source of confusion for many which stems in part from the similarity of wording with the practice of <a href="https://lostechies.com/derickbailey/2011/09/22/dependency-injection-is-not-the-same-as-the-dependency-inversion-principle/">Dependency Injection</a> as well as from the fact that the pattern’s formal definition reflects the platform from whence the principle was conceived (i.e. C++).  One might say that the abstract definition of the Dependency Inversion Principle was too dependent upon the details of its origin (ba dum tss).  
I’ve written about the principle a few times (perhaps my most succinct being <a href="https://stackoverflow.com/a/1113937/1219618">this Stack Overflow answer</a>), but put simply, the Dependency Inversion Principle has at its primary goal the decoupling of the portions of your application which define <i>policy</i> from the portions which define <i>implementation</i>.  That is to say, this principle seeks to keep the portions of your application which govern what your application does (e.g. workflow, business logic, etc.) from being tightly coupled to the portions of your application which govern the low level details of how it gets done (e.g. persistence to an Sql Server database, use of Redis for caching, etc.).</p> <p>A good example of a violation of this principle, which I recall from my NHibernate days, was that once upon a time NHibernate was tightly coupled to log4net.  This was later corrected, but at one time the NHibernate assembly had a hard dependency on log4net.  You could use a different logging library for your own code if you wanted, and you could use binding redirects to use a different version of log4net if you wanted, but at one time if you had a dependency on NHibernate then you had to deploy the log4net library.  I think this went unnoticed by many due to the fact that most developers who used NHibernate also used log4net.</p> <p>When I first learned about the principle, I immediately recognized that it seemed to have limited advertized value for most business applications in light of what Udi Dahan labeled<a href="http://udidahan.com/2009/06/07/the-fallacy-of-reuse/"> The Fallacy Of ReUse</a>.  That is to say, <i>properly understood</i>, the Dependency Inversion Principle has as its primary goal the reuse of components and keeping those components decoupled from dependencies which would keep them from being easily reused with other implementation components, but your application and business logic isn’t something that is likely to ever be reused in a different context.  The take away from that is basically that the advertized value of adhering to the Dependency Inversion Principle is really more applicable to libraries like NHibernate, Automapper, etc. and not so much to that workflow your team built for Acme Inc.’s distribution system.  
Nevertheless, the Dependency Inversion Principle had a practical value of implementing an architecture style Jeffrey Palermo labeled <a href="http://jeffreypalermo.com/blog/the-onion-architecture-part-1/">the Onion Architecture.</a> Specifically, in contrast to <a href="https://msdn.microsoft.com/en-us/library/ff650258.aspx"> traditional 3-layered architecture models</a> where UI, Business, and Data Access layers precluded using something like <a href="https://msdn.microsoft.com/en-us/library/ff648105.aspx?f=255&amp;MSPPError=-2147217396">Data Access Logic Components</a> to encapsulate an ORM to map data directly to entities within the Business Layer, inverting the dependencies between the Business Layer and the Data Access layer provided the ability for the application to interact with the database while also <i>seemingly </i>abstracting away the details of the data access technology used.</p> <p>While I always saw the fallacy in strictly trying to apply the Dependency Inversion Principle to invert the implementation details of how I got my data from my application layer so that I’d someday be able to use the application in a completely different context, it seemed the academically astute and in vogue way of doing Domain-driven Design at the time, seemed consistent with the GoF’s advice to program to an interface rather than an implementation, and provided an easier way to write isolation tests than trying to partially stub out ORM types.</p> <h2 id="the-catalyst">The Catalyst</h2> <p>For the longest time, I resisted using Entity Framework.  I had become fairly proficient at using NHibernate and I just saw it as plain stupid to use a framework that was years behind NHibernate in features and maturity, especially when it had such a steep learning curve.  A combination of things happened, though.  A lot of the NHibernate supporters (like many within the Alt.Net crowd) moved on to other platforms like Ruby and Node; anything with Microsoft’s name on it eventually seems to gain market share whether it’s better or not; and Entity Framework eventually did seem to mostly catch up with NHibernate in features, and surpassed it in some areas. So, eventually I found it impossible to avoid using Entity Framework which led to me trying to apply the same patterns I’d used before with this newer-to-me framework.</p> <p>To be honest, everything mostly worked, especially for the really simple stuff.  Eventually, though, I began to see little ways I had to modify my abstraction to accommodate differences in how Entity Framework did things from how NHibernate did things.  What I discovered was that, while my repositories allowed my application code to be physically decoupled from the ORM, the way I was using the repositories was in small ways semantically coupled to the framework.  
I wish I had kept some sort of record every time I ran into something, as the only real thing I can recall now were motivations with certain design approaches to expose the SaveChanges method for <a href="https://lostechies.com/derekgreer/2015/11/01/survey-of-entity-framework-unit-of-work-patterns/"> Unit of Work implementations</a> I don’t want to make more of the semantic coupling argument against repositories than it’s worth, but observing little places where <a href="https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/">my abstractions were leaking</a>, combined with the pebble in my shoe of developers who I felt were far better than me were saying I shouldn’t use them lead me to begin rethinking things.</p> <h2 id="more-effective-testing-strategies">More Effective Testing Strategies</h2> <p>It was actually a few years before I stopped using repositories that I stopped stubbing out repositories.  Around 2010, I learned that you can use Test-Driven Development to achieve 100% test coverage for the code for which you’re responsible, but when you plug your code in for the first time with that team that wasn’t designing to the same specification and not writing any tests at all that things may not work.  It was then that I got turned on to Acceptance Test Driven Development.  What I found was that writing high-level subcutaneous tests (i.e. skipping the UI layer, but otherwise end-to-end) was overall easier, was possible to align with acceptance criteria contained within a user story, provided more assurance that everything worked as a whole, and was easier to get teams on board with.  Later on, I surmised that I really shouldn’t have been writing isolation tests for components which, for the most part, are just specialized facades anyway.  All an isolation test for a facade really says is “did I delegate this operation correctly” and if you’re not careful you can end up just writing a whole bunch of tests that basically just validate whether you correctly configured your mocking library.</p> <p>So, by the time I started rethinking my use of repositories, I had long since stopped using them for test isolation.</p> <h2 id="taking-the-plunge">Taking the Plunge</h2> <p>It was actually about a year after I had become convinced that repositories were unnecessary, useless abstractions that I started working with a new codebase I had the opportunity to steer.  Once I eliminated them from the equation, everything got so much simpler.   Having been repository-free for about two years now, I think I’d have a hard time joining a team that had an affinity for them.</p> <h2 id="conclusion">Conclusion</h2> <p>If you’re still using repositories and you don’t have some other hangup you still need to get over like writing unit tests for your controllers or application services then give the repository-free lifestyle a try.  I bet you’ll love it.</p> Using Manual Mocks to test the AWS SDK with Jest https://derikwhittaker.blog/2018/02/20/using-manual-mocks-to-test-the-aws-sdk-with-jest/ Maintainer of Code, pusher of bits… urn:uuid:3a424860-3707-7327-2bb1-a60b9f3be47d Tue, 20 Feb 2018 13:56:45 +0000 Anytime you build Node applications it is highly suggested that your cover your code with tests.  
When your code interacts with 3rd party API&#8217;s such as AWS you will most certainly want to mock/stub your calls in order to prevent external calls (if you actually want to do external calls, these are called integration tests &#8230; <p><a href="https://derikwhittaker.blog/2018/02/20/using-manual-mocks-to-test-the-aws-sdk-with-jest/" class="more-link">Continue reading <span class="screen-reader-text">Using Manual Mocks to test the AWS SDK with&#160;Jest</span></a></p> <p>Anytime you build Node applications it is highly suggested that your cover your code with tests.  When your code interacts with 3rd party API&#8217;s such as AWS you will most certainly want to mock/stub your calls in order to prevent external calls (if you actually want to do external calls, these are called integration tests not unit tests.</p> <p>If you are using <a href="http://bit.ly/jest-get-started" target="_blank" rel="noopener">Jest</a>, one solution is utilize the built in support for <a href="http://bit.ly/jest-manual-mocks" target="_blank" rel="noopener">manual mocks.</a>  I have found the usage of manual mocks invaluable while testing 3rd party API&#8217;s such as the AWS.  Keep in mind just because I am using manual mocks this will remove the need for using libraries like <a href="http://bit.ly/sinon-js" target="_blank" rel="noopener">SinonJs</a> (a JavaScript framework for creating stubs/mocks/spies).</p> <p>The way that manual mocks work in Jest is as follows (from the Jest website&#8217;s documentation).</p> <blockquote><p><em>Manual mocks are defined by writing a module in a <code>__mocks__/</code> subdirectory immediately adjacent to the module. For example, to mock a module called <code>user</code> in the <code>models</code> directory, create a file called <code>user.js</code> and put it in the <code>models/__mocks__</code> directory. Note that the <code>__mocks__</code> folder is case-sensitive, so naming the directory <code>__MOCKS__</code> will break on some systems. If the module you are mocking is a node module (eg: <code>fs</code>), the mock should be placed in the <code>__mocks__</code> directory adjacent to <code>node_modules</code> (unless you configured <a href="https://facebook.github.io/jest/docs/en/configuration.html#roots-array-string"><code>roots</code></a> to point to a folder other than the project root).</em></p></blockquote> <p>In my case I want to mock out the usage of the <a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">AWS-SDK</a> for <a href="http://bit.ly/aws-sdk-node" target="_blank" rel="noopener">Node</a>.</p> <p>To do this I created a __mocks__ folder at the root of my solution.  
I then created an <a href="http://bit.ly/gist-aws-sdk-js" target="_blank" rel="noopener">aws-sdk.js</a> file inside this folder.</p> <p>Now that I have my mocks folder created with an aws-sdk.js file I am able to consume my manual mock in my Jest test by simply referencing the aws-sdk via a <code>require('aws-sdk')</code> command.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk'); </pre> </div> <p>With the declaration of AWS above my code is able to use the <a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">NPM </a>package during normal usage, or my aws-sdk.js mock when running under the Jest context.</p> <p>Below is a small sample of the code I have inside my aws-sdk.js file for my manual mock.</p> <div class="code-snippet"> <pre class="code-content">
const stubs = require('./aws-stubs');

const AWS = {};

// This here is to allow/prevent runtime errors if you are using
// AWS.config to do some runtime configuration of the library.
// If you do not need any runtime configuration you can omit this.
AWS.config = {
  setPromisesDependency: (arg) =&gt; {}
};

AWS.S3 = function() { }

// Because I care about using the S3 services which are part of the SDK
// I need to setup the correct identifier.
//
AWS.S3.prototype = {
  ...AWS.S3.prototype,

  // Stub for the listObjectsV2 method in the sdk
  listObjectsV2(params){
    const stubPromise = new Promise((resolve, reject) =&gt; {
      // pulling in stub data from an external file to remove the noise
      // from this file. See the top line for how to pull this in
      resolve(stubs.listObjects);
    });

    return {
      promise: () =&gt; {
        return stubPromise;
      }
    }
  }
};

// Export my AWS function so it can be referenced via requires
module.exports = AWS;
</pre> </div> <p>A few things to point out in the code above.</p> <ol> <li>I chose to use the <a href="http://bit.ly/sdk-javascript-promises" target="_blank" rel="noopener">promise</a>s implementation of the listObjectsV2.  Because of this I need to return a promise method as my result on my listObjectsV2 function.  I am sure there are other ways to accomplish this, but this worked and is pretty easy.</li> <li>My function is returning stub data, but this data is described in a separate file called aws-stubs.js which sits alongside my aws-sdk.js file.  I went this route to remove the noise of having the stub data inside my aws-sdk file.  You can see a full example of this <a href="http://bit.ly/gist-aws-stub-data" target="_blank" rel="noopener">here</a>.</li> </ol> <p>Now that I have everything set up my tests will no longer attempt to hit the actual aws-sdk, but when running in non-test mode they will.</p>
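<p>To show the manual mock being consumed, here is a sketch of a Jest test along these lines. The bucket name and assertion are hypothetical and not from the original post; because the mock sits in the __mocks__ folder adjacent to node_modules, Jest picks it up automatically per the docs quoted above, so the explicit jest.mock call is optional but makes the intent clear.</p> <div class="code-snippet"> <pre class="code-content">
// Hypothetical test showing the manual mock above in action
jest.mock('aws-sdk'); // optional for node modules, included for clarity
const AWS = require('aws-sdk');

it('returns the stubbed S3 listing instead of calling AWS', async () =&gt; {
  const s3 = new AWS.S3();

  // Resolves with whatever stub data is defined in aws-stubs.js
  const result = await s3.listObjectsV2({ Bucket: 'any-bucket-name' }).promise();

  // The assertion is deliberately generic because the shape of the
  // stub data lives in the separate aws-stubs.js file
  expect(result).toBeDefined();
});
</pre> </div>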
<p>Till next time,</p> Configure Visual Studio Code to debug Jest Tests https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/ Maintainer of Code, pusher of bits… urn:uuid:31928626-b984-35f6-bf96-5bfb71e16208 Fri, 16 Feb 2018 21:33:03 +0000 If you have not given Visual Studio Code a spin you really should, especially if you are doing web/javascript/Node development. One super awesome feature of VS Code is the ability to easily configure debugging for your Jest (should work just fine with other JavaScript testing frameworks) tests.  I have found that most of &#8230; <p><a href="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/" class="more-link">Continue reading <span class="screen-reader-text">Configure Visual Studio Code to debug Jest&#160;Tests</span></a></p> <p>If you have not given <a href="https://code.visualstudio.com/" target="_blank" rel="noopener">Visual Studio Code</a> a spin you really should, especially if you are doing web/javascript/Node development.</p> <p>One super awesome feature of VS Code is the ability to easily configure debugging for your <a href="https://facebook.github.io/jest/" target="_blank" rel="noopener">Jest </a>(should work just fine with other JavaScript testing frameworks) tests.  I have found that most of the time I do not need to actually step into the debugger when writing tests, but there are times that using <code>console.log</code> is just too much friction and I want to step into the debugger.</p> <p>So how do we configure VS Code?</p> <p>First you will need to install the <a href="https://www.npmjs.com/package/jest-cli" target="_blank" rel="noopener">Jest-Cli</a> NPM package (I am assuming you already have Jest set up to run your tests; if you do not, please read the <a href="https://facebook.github.io/jest/docs/en/getting-started.html" target="_blank" rel="noopener">Getting-Started</a> docs).  If you fail to do this step you will get the following error in Code when you try to run the debugger.</p> <p><img class="alignnone size-full wp-image-78" src="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640" alt="JestCLI" /></p> <p>After you have Jest-Cli installed you will need to configure VS Code for debugging.  To do this open up the configuration by clicking Debug -&gt; Open Configurations.
This will open up a file called launch.json.</p> <p>Once launch.json is open add the following configuration</p> <div class="code-snippet"> <pre class="code-content"> { "name": "Jest Tests", "type": "node", "request": "launch", "program": "${workspaceRoot}/node_modules/jest-cli/bin/jest.js", "stopOnEntry": false, "args": ["--runInBand"], "cwd": "${workspaceRoot}", "preLaunchTask": null, "runtimeExecutable": null, "runtimeArgs": [ "--nolazy" ], "env": { "NODE_ENV": "development" }, "console": "internalConsole", "sourceMaps": false, "outFiles": [] } </pre> </div> <p>Here is a gist of a working <a href="https://gist.github.com/derikwhittaker/331d4a5befddf7fc6b2599f1ada5d866" target="_blank" rel="noopener">launch.json</a> file.</p> <p>After you save the file you are almost ready to start your debugging.</p> <p>Before you can debug you will want to open the debug menu (the bug icon on the left toolbar).   This will show a drop down menu with different configurations.  Make sure &#8216;Jest Test&#8217; is selected.</p> <p><img data-attachment-id="79" data-permalink="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/jesttest/" data-orig-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640" data-orig-size="240,65" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="JestTest" data-image-description="" data-medium-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640?w=240" data-large-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640?w=240" class="alignnone size-full wp-image-79" src="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640" alt="JestTest" srcset="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png 240w, https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=150 150w" sizes="(max-width: 240px) 100vw, 240px" /></p> <p>If you have this setup correctly you should be able to set breakpoints and hit F5.</p> <p>Till next time,</p> On Migrating Los Techies to Github Pages https://lostechies.com/derekgreer/2018/02/16/on-migrating-lostechies-to-github-pages/ Los Techies urn:uuid:74de4506-44e0-f605-61cb-8ffe972f6787 Fri, 16 Feb 2018 20:00:00 +0000 We recently migrated Los Techies from a multi-site installation of WordPress to Github Pages, so I thought I’d share some of the more unique portions of the process. For a straightforward guide on migrating from WordPress to Github Pages, Tomomi Imura has published an excellent guide available here that covers exporting content, setting up a new Jekyll site (what Github Pages uses as its static site engine), porting the comments, and DNS configuration. The purpose of this post is really just to cover some of the unique aspects that related to our particular installation. <p>We recently migrated Los Techies from a multi-site installation of WordPress to Github Pages, so I thought I’d share some of the more unique portions of the process. 
For a straightforward guide on migrating from WordPress to Github Pages, Tomomi Imura has published an excellent guide available <a href="https://girliemac.com/blog/2013/12/27/wordpress-to-jekyll/">here</a> that covers exporting content, setting up a new Jekyll site (what Github Pages uses as its static site engine), porting the comments, and DNS configuration. The purpose of this post is really just to cover some of the unique aspects that relate to our particular installation.</p> <h2 id="step-1-exporting-content">Step 1: Exporting Content</h2> <p>Having recently migrated <a href="http://aspiringcraftsman.com">my personal blog</a> from WordPress to Github Pages using the aforementioned guide, I thought the process of doing the same for Los Techies would be relatively easy. Unfortunately, due to the fact that we had a woefully out-of-date installation of WordPress, migrating Los Techies proved to be a bit problematic. First, the WordPress to Jekyll Exporter plugin wasn’t compatible with our version of WordPress. Additionally, our installation of WordPress couldn’t be upgraded in place for various reasons. As a result, I ended up taking the rather labor-intensive path of exporting each author’s content using the default WordPress XML export and then, for each author, importing into an up-to-date installation of WordPress using the hosting site with which I previously hosted my personal blog, exporting the posts using the Jekyll Exporter plugin, and then deleting the posts in preparation for the next iteration. This resulted in a collection of zipped, mostly ready posts for each author.</p> <h2 id="step-2-configuring-authors">Step 2: Configuring Authors</h2> <p>Our previous platform utilized the multi-site features of WordPress to facilitate a single site with multiple contributors. By default, Jekyll looks for content within a special folder in the root of the site named _posts, but there are several issues with trying to represent multiple contributors within the _posts folder. Fortunately, Jekyll has a feature called Collections which allows you to set up groups of posts which can each have their own associated configuration properties. Once each author’s posts were copied to corresponding collection folders, a series of scripts were written to create author-specific index.html, archive.html, and tags.html files which are used by a custom post layout. Additionally, due to the way the WordPress content was exported, the permalinks generated for each post did not reflect the author’s subdirectory, so another script was written to strip out all the generated permalinks.</p> <h2 id="step-3-correcting-liquid-errors">Step 3: Correcting Liquid Errors</h2> <p>Jekyll uses a language called Liquid as its templating engine. Once all the content was in place, any double curly braces contained within posts were interpreted as Liquid commands, which ended up breaking the build process. For that, each offending post had to be edited to wrap the content in Liquid directives {% raw %} … {% endraw %} to keep the content from being interpreted by the Liquid parser. Additionally, there were a few other odd things which were causing issues (such as posts with non-breaking space characters) for which more scripts were written to modify the posts to non-offending content.</p> <h2 id="step-4-enabling-disqus">Step 4: Enabling Disqus</h2> <p>The next step was to get Disqus comments working for the posts. 
By default, Disqus will use the page URL as the page identifier, so as long as the paths match then enabling Disqus should just work. The WordPress Disqus plugin we were using utilized a unique post id and guid as the Disqus page identifier, so the Disqus javascript had to be configured to use these properties. These values were preserved by the Jekyll exporter, but unfortunately the generated id property in the Jekyll front matter was getting internally overridden by Jekyll, so another script had to be written to modify all the posts to rename the properties used for these values. Properties were added to the Collection configuration in the main _config.yml to designate the Disqus shortname for each author and allow people to toggle whether Disqus was enabled or disabled for their posts.</p> <h2 id="step-5-converting-gists">Step 5: Converting Gists</h2> <p>Many authors at Los Techies used a Gist WordPress plugin to embed code samples within their posts. Github Pages supports a jekyll-gist plugin, so another script was written to modify all the posts to use Liquid syntax to denote the gists. This mostly worked, but there were still a number of posts which had to be manually edited to deal with different ways people were denoting their gists. In retrospect, it would have been better to use JavaScript rather than the Jekyll gist plugin due to the size of the Los Techies site. Every plugin you use adds time to the overall build process, which can become problematic as we’ll touch on next.</p> <h2 id="step-6-excessive-build-time-mitigation">Step 6: Excessive Build-time Mitigation</h2> <p>The first iteration of the conversion used the Liquid syntax for generating the sidebar content which lists recent site-wide posts, recent author-specific posts, and the list of contributing authors. This resulted in extremely long build times, but it worked and who cares once the site is rendered, right? Well, what I found out was that Github has a hard cut off of 10 minutes for Jekyll site builds. If your site doesn’t build within 10 minutes, the process gets killed. At first I thought “Oh no! After all this effort, Github just isn’t going to support a site our size!” I then realized that rather than having every page loop over all the content, I could create a Jekyll template to generate JSON content one time and then use JavaScript to retrieve the content and dynamically generate the sidebar DOM elements. This sped up the build significantly, taking the build from close to a half-hour to just a few minutes.</p> <h2 id="step-7-converting-wordpress-uploaded-content">Step 7: Converting WordPress Uploaded Content</h2> <p>Another headache that presented itself is how WordPress represented uploaded content. Everything that anyone had ever uploaded to the site for images and downloads used within their posts was stored in a cryptic folder structure. Each folder had to be interrogated to see which files contained therein matched which author, the folder structure had to be reworked to accommodate the nature of the Jekyll site, and more scripts had to be written to edit everyone’s posts to change paths to the new content. Of course, the scripts only worked for about 95% of the posts; the rest had to be edited manually to fix things like non-printable characters being used in file names, etc.</p> <h2 id="step-8-handling-redirects">Step 8: Handling Redirects</h2> <p>The final step to get the initial version of the conversion complete was to handle redirects which were formerly being handled by .htaccess. 
The Los Techies site started off using Community Server prior to migrating to WordPress and redirects were set up using .htaccess to maintain the paths to all the previous content locations. Github Pages doesn’t support .htaccess, but it does support a Jekyll redirect plugin. Unfortunately, it requires adding a redirect property to each post requiring a redirect, and we had several thousand, so I had to write another script to read the .htaccess file and figure out which post went with each line. Another unfortunate aspect of using the Jekyll redirect plugin is that it adds overhead to the build time which, as discussed earlier, can become an issue.</p> <h2 id="step-9-enabling-aggregation">Step 9: Enabling Aggregation</h2> <p>Once the conversion was complete, I decided to dedicate some time to figuring out how we might add the ability to aggregate posts from external feeds. The first step to this was finding a service that could aggregate feeds together. You might think there would be a number of things that do this, and while I did find at least a half-dozen services, there were only a couple I found that allowed you to maintain a single feed and add/remove new feeds while preserving the aggregated feed. Most seemed to only allow you to do a one-time aggregation. For this I settled on a site named <a href="http://feed.informer.com">feed.informer.com</a>. Next, I replaced the landing page with JavaScript that dynamically builds the page from the aggregated feed, did the same for the recent author posts section, and added a special external template capable of making an individual post look like it’s actually hosted on Los Techies. The final result was a site that displays a mixture of local content along with aggregated content.</p> <h2 id="conclusion">Conclusion</h2> <p>Overall, the conversion was way more work than I anticipated, but I believe it was worth the effort. The site is now much faster than it used to be and we aren’t having to pay a hosting service to host our site.</p> Going Async with Node AWS SDK with Express https://derikwhittaker.blog/2018/02/13/going-async-with-node-aws-sdk-with-express/ Maintainer of Code, pusher of bits… urn:uuid:d4750cda-8c6e-8b2f-577b-78c746ee6ebd Tue, 13 Feb 2018 13:00:30 +0000 When building applications in Node/Express you will quickly come to realize that everything is done asynchronously. But how you accomplish these async tasks can vary.  The 'old school' way was to use callbacks, which often led to callback hell.  Then came along Promises, which we thought were going to solve all the world's problems; turns out they helped, but did not solve everything.  Finally, in Node 8.0 (ok, you could use them in Node 7.6) the support for async/await was introduced and this really has cleaned up and enhanced the readability of your code. <p>When building applications in <a href="https://nodejs.org/en/" target="_blank" rel="noopener">Node</a>/<a href="http://expressjs.com/" target="_blank" rel="noopener">Express </a>you will quickly come to realize that everything is done asynchronously. But how you accomplish these async tasks can vary.  The &#8216;old school&#8217; way was to use callbacks, which often led to <a href="http://callbackhell.com/" target="_blank" rel="noopener">callback hell</a>.  
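</p> <p>To make the contrast concrete, below is a minimal, made-up sketch of the kind of nesting that callback style tends to produce; the function names and data are hypothetical and exist only for illustration.</p> <div class="code-snippet"> <pre class="code-content">// Hypothetical async operations using Node-style callbacks
function getUser(id, cb) { setTimeout(() =&gt; cb(null, { id: id, name: 'user' }), 10); }
function getOrders(userId, cb) { setTimeout(() =&gt; cb(null, [{ id: 1 }]), 10); }
function getOrderDetails(orderId, cb) { setTimeout(() =&gt; cb(null, { orderId: orderId, total: 42 }), 10); }

// The nesting that gives "callback hell" its name
getUser(7, (err, user) =&gt; {
  if (err) { return console.error(err); }
  getOrders(user.id, (err, orders) =&gt; {
    if (err) { return console.error(err); }
    getOrderDetails(orders[0].id, (err, details) =&gt; {
      if (err) { return console.error(err); }
      console.log(details); // the data we wanted, three levels deep
    });
  });
});
</pre> </div>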
<p>Then came along <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise">Promises</a>, which we thought were going to solve all the world&#8217;s problems; turns out they helped, but did not solve everything.  Finally, in Node 8.0 (ok, you could use them in Node 7.6) the support for <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function" target="_blank" rel="noopener">async</a>/<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await" target="_blank" rel="noopener">await</a> was introduced and this really has cleaned up and enhanced the readability of your code.</p> <p>Having the ability to use async/await is great, and is supported out of the box w/ Express.  But what do you do when you are using a library which still wants to use promises or callbacks? The case in point for this article is the <a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank" rel="noopener">AWS Node SDK</a>.</p> <p>By default if you read through the AWS SDK documentation the examples lead you to believe that you need to use callbacks when implementing the SDK.  Well this can really lead to some nasty code in the world of Node/Express.  However, as of <a href="https://aws.amazon.com/blogs/developer/support-for-promises-in-the-sdk/" target="_blank" rel="noopener">v2.3.0</a> of the AWS SDK there is support for Promises.  This is much cleaner than using callbacks, but still poses a bit of an issue if you want to use async/await in your Express routes.</p> <p>However, with a bit of work you can get your promise-based AWS calls to play nicely with your async/await-based Express routes.  Let&#8217;s take a look at how we can accomplish this.</p> <p>Before you get started I am going to make a few assumptions.</p> <ol> <li>You already have a Node/Express application set up</li> <li>You already have the AWS SDK for Node installed; if not, read <a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank" rel="noopener">here</a></li> </ol> <p>The first thing we are going to need to do is add a reference to the AWS SDK and configure it to use promises.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk');

// Passing null tells the SDK to use the native Promise implementation
AWS.config.setPromisesDependency(null);
</pre> </div> <p>After we have our SDK configured we can implement our route handler.  In my example here I am placing all the logic inside my handler.  In a real code base I would suggest better deconstruction of this code into smaller parts.</p> <div class="code-snippet"> <pre class="code-content">// AWS was required and configured to use promises above
const express = require('express');
const router = express.Router();
const s3 = new AWS.S3();

router.get('/myRoute', async (req, res) =&gt; {
  const params = { Bucket: "bucket_name_here" };
  let results = {};

  // .promise() returns a Promise instead of expecting a callback
  var listPromise = s3.listObjects(params).promise();
  listPromise.then((data) =&gt; { results = data; });

  // do not respond until the promise has resolved
  await Promise.all([listPromise]);

  res.json({ data: results });
});

module.exports = router;
</pre> </div> <p>Let&#8217;s review the code above and call out a few important items.</p> <p>The first thing to notice is the addition of the <code>async</code> keyword in my route handler.  This is what allows us to use async/await in Node/Express.</p> <p>The next thing to look at is how I am calling s3.listObjects.  Notice I am <strong>NOT </strong>providing a callback to the method, but instead I am chaining with .promise().  This is what instructs the SDK to use promises vs callbacks.</p>
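<p>To see the two styles side by side, here is a minimal sketch of the same listObjects call written both ways; it assumes the s3 client and params object from the example above, the bucket name remains a placeholder, and error handling is kept to a bare console.error.</p> <div class="code-snippet"> <pre class="code-content">// Classic callback style, which is what most of the AWS SDK docs show
s3.listObjects(params, (err, data) =&gt; {
  if (err) { return console.error(err); }
  console.log(data.Contents.length);
});

// Promise style enabled by chaining .promise()
s3.listObjects(params).promise()
  .then((data) =&gt; console.log(data.Contents.length))
  .catch((err) =&gt; console.error(err));
</pre> </div>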
<p>Once I have my promise I chain a &#8216;then&#8217; in order to handle my response.</p> <p>The last thing to pay attention to is the line with <code>await Promise.all([listPromise]);</code> This is the magic that forces our route handler to not return prior to the resolution of all of our Promises.  Without this your call would exit prior to the listObjects call completing.</p> <p>Finally, we are simply returning our data from the listObjects call via a <code>res.json</code> call.</p> <p>That&#8217;s it, pretty straightforward, once you learn that the AWS SDK supports something other than callbacks.</p> <p>Till next time,</p> Unable To Access Mysql With Root and No Password After New Install On Ubuntu https://blog.jasonmeridth.com/posts/unable-to-access-mysql-with-root-and-no-password-after-new-install-on-ubuntu/ Jason Meridth urn:uuid:f81a51eb-8405-7add-bddb-f805b183347e Wed, 31 Jan 2018 00:13:00 +0000 <p>This bit me in the rear end again today. Had to reinstall mysql-server-5.7 for other reasons.</p> <p>You just installed <code class="highlighter-rouge">mysql-server</code> locally for your development environment on a recent version of Ubuntu (I have 17.10 artful installed). You did it with a blank password for the <code class="highlighter-rouge">root</code> user. You type <code class="highlighter-rouge">mysql -u root</code> and you see <code class="highlighter-rouge">Access denied for user 'root'@'localhost'</code>.</p> <p><img src="https://blog.jasonmeridth.com/images/wat.png" alt="wat" /></p> <p>Issue: Because you chose to not have a password for the <code class="highlighter-rouge">root</code> user, the <code class="highlighter-rouge">auth_plugin</code> for my MySQL installation defaulted to <code class="highlighter-rouge">auth_socket</code>. That means if you type <code class="highlighter-rouge">sudo mysql -u root</code> you will get in. If you don’t, then this is NOT the fix for you.</p> <p>Solution: Change the <code class="highlighter-rouge">auth_plugin</code> to <code class="highlighter-rouge">mysql_native_password</code> so that you can use the root user in the database.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo mysql -u root
mysql&gt; USE mysql;
mysql&gt; UPDATE user SET plugin='mysql_native_password' WHERE User='root';
mysql&gt; FLUSH PRIVILEGES;
mysql&gt; exit;
$ sudo systemctl restart mysql
$ sudo systemctl status mysql
</code></pre></div></div> <p><strong>NB</strong> ALWAYS set a password for mysql-server in staging/production.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/unable-to-access-mysql-with-root-and-no-password-after-new-install-on-ubuntu/">Unable To Access Mysql With Root and No Password After New Install On Ubuntu</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on January 30, 2018.</p> New Job https://blog.jasonmeridth.com/posts/new-job/ Jason Meridth urn:uuid:102e69a7-2b63-e750-2fa5-f46372d4d7c1 Mon, 08 Jan 2018 18:13:00 +0000 <p>Well, it is a new year and I’ve started a new job. 
I am now a Senior Software Engineer at <a href="https://truelinkfinancial.com">True Link Financial</a>.</p> <p><img src="https://blog.jasonmeridth.com/images/tllogo.png" alt="true link financial logo" /></p> <p>After interviewing with the co-founders Kai and Claire and their team, I knew I wanted to work here.</p> <p><strong>TL;DR</strong>: True Link: We give the elderly and disabled (really, anyone) back their financial freedom where they may not usually have it.</p> <p>Longer Version: Imagine you have an elderly family member who may start showing signs of dementia. You can give them a True Link card and administer their card. You link it to their bank account or another source of funding and you can set limitations on when, where and how the card can be used. The family member feels freedom by not having to continually ask for money but is also protected from scammers and non-friendly people (yep, they exist).</p> <p>The customer service team, the marketing team, the product team, the engineering team and everyone else at True Link are amazing.</p> <p>For any nerd readers, the tech stack is currently Rails, React, AWS, Ansible. We’ll be introducing Docker and Kubernetes soon hopefully, but always ensuring the right tools for the right job.</p> <p>Looking forward to 2018.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/new-job/">New Job</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on January 08, 2018.</p> Hello, React! – A Beginner’s Setup Tutorial https://lostechies.com/derekgreer/2017/05/25/hello-react-a-beginners-setup-tutorial/ Los Techies urn:uuid:896513a4-c41d-c8ea-820b-fbc3e2b5a442 Thu, 25 May 2017 08:00:32 +0000 React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over configuration for getting started. Tutorials which side-step configuration by using jsfiddle or code generator options are great when you’re wanting to just focus on the framework features themselves, but many tutorials leave beginners struggling to piece things together when you’re ready to create a simple react application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process. <p>React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over configuration for getting started. Tutorials which side-step configuration by using jsfiddle or code generator options are great when you’re wanting to just focus on the framework features themselves, but many tutorials leave beginners struggling to piece things together when you’re ready to create a simple react application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process.</p> <h2 id="a-simple-tutorial">A Simple Tutorial</h2> <p>This tutorial is merely intended to help walk you through the steps to getting a simple React example up and running. When you’re ready to dive into actually learning the React framework, a great list of tutorials can be found <a href="http://andrewhfarmer.com/getting-started-tutorials/">here.</a></p> <p>There are several build, transpiler, or bundling tools from which to select when working with React. 
For this tutorial, we’ll be using Node, NPM, Webpack, and Babel.</p> <h2 id="step-1-install-node">Step 1: Install Node</h2> <p>Download and install Node for your target platform. Node distributions can be obtained <a href="https://nodejs.org/en/">here</a>.</p> <h2 id="step-2-create-a-project-folder">Step 2: Create a Project Folder</h2> <p>From a command line prompt, create a folder where you plan to develop your example.</p> <pre>$&gt; mkdir hello-react </pre> <h2 id="step-3-initialize-project">Step 3: Initialize Project</h2> <p>Change directory into the example folder and use the Node Package Manager (npm) to initialize the project:</p> <pre>$&gt; cd hello-react
$&gt; npm init --yes </pre> <p>This results in the creation of a package.json file. While not technically necessary for this example, creating this file will allow us to persist our packaging and runtime dependencies.</p> <h2 id="step-4-install-react">Step 4: Install React</h2> <p>React is broken up into a core framework package and a package related to rendering to the Document Object Model (DOM).</p> <p>From the hello-react folder, run the following command to install these packages and add them to your package.json file:</p> <pre>$&gt; npm install --save-dev react react-dom </pre> <h2 id="step-5-install-babel">Step 5: Install Babel</h2> <p>Babel is a transpiler, which is to say it’s a tool for converting one language or language version to another. In our case, we’ll be converting ECMAScript 2015 to ECMAScript 5.</p> <p>From the hello-react folder, run the following command to install babel:</p> <pre>$&gt; npm install --save-dev babel-core </pre> <h2 id="step-6-install-webpack">Step 6: Install Webpack</h2> <p>Webpack is a module bundler. We’ll be using it to package all of our scripts into a single script we’ll include in our example Web page.</p> <p>From the hello-react folder, run the following command to install webpack globally:</p> <pre>$&gt; npm install webpack --global </pre> <h2 id="step-7-install-babel-loader">Step 7: Install Babel Loader</h2> <p>Babel loader is a Webpack plugin for using Babel to transpile scripts during the bundling process.</p> <p>From the hello-react folder, run the following command to install babel loader:</p> <pre>$&gt; npm install --save-dev babel-loader </pre> <h2 id="step-8-install-babel-presets">Step 8: Install Babel Presets</h2> <p>Babel presets are collections of plugins needed to support a given feature. For example, the latest version of babel-preset-es2015 at the time of this writing will install 24 plugins which enable Babel to transpile ECMAScript 2015 to ECMAScript 5. We’ll be using presets for ES2015 as well as presets for React. 
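</p> <p>To make that concrete, here is a rough sketch of the kind of transform the es2015 preset performs; the commented output is approximate rather than Babel’s exact emit:</p> <pre>// Input (ECMAScript 2015): arrow function and template literal
const greet = (name) =&gt; `Hello, ${name}!`;

// Roughly what the es2015 preset produces (ECMAScript 5):
// var greet = function greet(name) {
//   return 'Hello, ' + name + '!';
// };

console.log(greet('React')); // "Hello, React!"
</pre> <p>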
The React presets are primarily needed for processing of <a href="https://facebook.github.io/react/docs/introducing-jsx.html">JSX</a>.</p> <p>From the hello-react folder, run the following command to install the babel presets for both ES2015 and React:</p> <pre>$&gt; npm install --save-dev babel-preset-es2015 babel-preset-react </pre> <h2 id="step-9-configure-babel">Step 9: Configure Babel</h2> <p>In order to tell Babel which presets we want to use when transpiling our scripts, we need to provide a babel config file.</p> <p>Within the hello-react folder, create a file named .babelrc with the following contents:</p> <pre>{ "presets" : ["es2015", "react"] } </pre> <h2 id="step-10-configure-webpack">Step 10: Configure Webpack</h2> <p>In order to tell Webpack we want to use Babel, where our entry point module is, and where we want the output bundle to be created, we need to create a Webpack config file.</p> <p>Within the hello-react folder, create a file named webpack.config.js with the following contents:</p> <pre>const path = require('path'); module.exports = { entry: './app/index.js', output: { path: path.resolve('dist'), filename: 'index_bundle.js' }, module: { rules: [ { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ } ] } } </pre> <h2 id="step-11-create-a-react-component">Step 11: Create a React Component</h2> <p>For our example, we’ll just be creating a simple component which renders the text “Hello, React!”.</p> <p>First, create an app sub-folder:</p> <pre>$&gt; mkdir app </pre> <p>Next, create a file named app/index.js with the following content:</p> <pre>import React from 'react'; import ReactDOM from 'react-dom'; class HelloWorld extends React.Component { render() { return ( &lt;div&gt; Hello, React! &lt;/div&gt; ) } }; ReactDOM.render(&lt;HelloWorld /&gt;, document.getElementById('root')); </pre> <p>Briefly, this code includes the react and react-dom modules, defines a HelloWorld class which returns an element containing the text “Hello, React!” expressed using <a href="https://facebook.github.io/react/docs/introducing-jsx.html">JSX syntax</a>, and finally renders an instance of the HelloWorld element (also using JSX syntax) to the DOM.</p> <p>If you’re completely new to React, don’t worry too much about trying to fully understand the code. Once you’ve completed this tutorial and have an example up and running, you can move on to one of the aforementioned tutorials, or work through <a href="https://facebook.github.io/react/docs/hello-world.html">React’s Hello World example</a> to learn more about the syntax used in this example.</p> <div class="note"> <p> Note: In many examples, you will see the following syntax: </p> <pre> var HelloWorld = React.createClass( { render() { return ( &lt;div&gt; Hello, React! &lt;/div&gt; ) } }); </pre> <p> This syntax is how classes were defined in older versions of React and will therefore be what you see in older tutorials. As of React version 15.5.0 use of this syntax will produce the following warning: </p> <p style="color: red"> Warning: HelloWorld: React.createClass is deprecated and will be removed in version 16. Use plain JavaScript classes instead. If you&#8217;re not yet ready to migrate, create-react-class is available on npm as a drop-in replacement. 
</p> </div> <h2 id="step-12-create-a-webpage">Step 12: Create a Webpage</h2> <p>Next, we’ll create a simple html file which includes the bundled output defined in step 10 and declare a &lt;div&gt; element with the id “root” which is used by our react source in step 11 to render our HelloWorld component.</p> <p>Within the hello-react folder, create a file named index.html with the following contents:</p> <pre>&lt;html&gt; &lt;div id="root"&gt;&lt;/div&gt; &lt;script src="./dist/index_bundle.js"&gt;&lt;/script&gt; &lt;/html&gt; </pre> <h2 id="step-13-bundle-the-application">Step 13: Bundle the Application</h2> <p>To convert our app/index.js source to ECMAScript 5 and bundle it with the react and react-dom modules we’ve included, we simply need to execute webpack.</p> <p>Within the hello-react folder, run the following command to create the dist/index_bundle.js file reference by our index.html file:</p> <pre>$&gt; webpack </pre> <h2 id="step-14-run-the-example">Step 14: Run the Example</h2> <p>Using a browser, open up the index.html file. If you’ve followed all the steps correctly, you should see the following text displayed:</p> <pre>Hello, React! </pre> <h2 id="conclusion">Conclusion</h2> <p>Congratulations! After completing this tutorial, you should have a pretty good idea about the steps involved in getting a basic React app up and going. Hopefully this will save some absolute beginners from spending too much time trying to piece these steps together.</p> Up into the Swarm https://lostechies.com/gabrielschenker/2017/04/08/up-into-the-swarm/ Los Techies urn:uuid:844f7b20-25e5-e658-64f4-e4d5f0adf614 Sat, 08 Apr 2017 20:59:26 +0000 Last Thursday evening I had the opportunity to give a presentation at the Docker Meetup in Austin TX about how to containerize a Node JS application and deploy it into a Docker Swarm. I also demonstrated techniques that can be used to reduce friction in the development process when using containers. <p>Last Thursday evening I had the opportunity to give a presentation at the Docker Meetup in Austin TX about how to containerize a Node JS application and deploy it into a Docker Swarm. I also demonstrated techniques that can be used to reduce friction in the development process when using containers.</p> <p>The meeting was recorded but unfortunately sound only is available after approximately 16 minutes. You might want to just scroll forward to this point.</p> <p>Video: <a href="https://youtu.be/g786WiS5O8A">https://youtu.be/g786WiS5O8A</a></p> <p>Slides and code: <a href="https://github.com/gnschenker/pets-node">https://github.com/gnschenker/pets-node</a></p> New Year, New Blog https://lostechies.com/jimmybogard/2017/01/26/new-year-new-blog/ Los Techies urn:uuid:447d30cd-e297-a888-7ccc-08c46f5a1688 Thu, 26 Jan 2017 03:39:05 +0000 One of my resolutions this year was to take ownership of my digital content, and as such, I’ve launched a new blog at jimmybogard.com. I’m keeping all my existing content on Los Techies, where I’ve been humbled to be a part of for the past almost 10 years. Hundreds of posts, thousands of comments, and innumerable wrong opinions on software and systems, it’s been a great ride. <p>One of my resolutions this year was to take ownership of my digital content, and as such, I’ve launched a new blog at <a href="https://jimmybogard.com/">jimmybogard.com</a>. 
I’m keeping all my existing content on <a href="https://jimmybogard.lostechies.com/">Los Techies</a>, which I’ve been humbled to be a part of for the past <a href="http://grabbagoft.blogspot.com/2007/11/joining-los-techies.html">almost 10 years</a>. Hundreds of posts, thousands of comments, and innumerable wrong opinions on software and systems; it’s been a great ride.</p> <p>If you’re still subscribed to my FeedBurner feed – nothing to change, you’ll get everything as it should. If you’re only subscribed to the Los Techies feed…well you’ll need to <a href="http://feeds.feedburner.com/GrabBagOfT">subscribe to my feed</a> now.</p> <p>Big thanks to everyone at Los Techies that’s put up with me over the years, especially our site admin <a href="https://jasonmeridth.com/">Jason</a>, who has become far more knowledgeable about WordPress than he ever probably wanted.</p>