Los Techies http://feed.informer.com/digests/ZWDBOR7GBI/feeder Los Techies Respective post owners and feed distributors Thu, 08 Feb 2018 14:40:57 +0000 Feed Informer http://feed.informer.com/ NServiceBus and .NET Core Generic Host https://jimmybogard.com/nservicebus-and-net-core-generic-host/ Jimmy Bogard urn:uuid:347f9fe1-9126-5341-47f9-63aa1ecea54e Mon, 23 Mar 2020 13:19:53 +0000 <p>My current client is using .NET Core 2.x, with plans to upgrade to 3.x next month. As part of that system, we do quite a bit of messaging, with NServiceBus as the tool of choice to help make this easier. To get it working with our .NET Core</p> <p>My current client is using .NET Core 2.x, with plans to upgrade to 3.x next month. As part of that system, we do quite a bit of messaging, with NServiceBus as the tool of choice to help make this easier. To get it working with our .NET Core 2.x applications, we did quite a bit of what I laid out in my <a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-a-generic-host/">Messaging Endpoints in Azure</a> series.</p><p>Since then, NServiceBus released first-class support for the .NET Core Generic Host, which underwent a fairly large refactoring in the 2.x to 3.0 timeframe. <a href="https://andrewlock.net/ihostingenvironment-vs-ihost-environment-obsolete-types-in-net-core-3/">Andrew Lock's post</a> goes into more detail, but the gist of it is, NServiceBus has first-class support for .NET Core 3.x and later.</p><p>What that means for us is hosting NServiceBus inside a .NET Core application couldn't be easier. The <a href="https://docs.particular.net/nservicebus/hosting/extensions-hosting">NServiceBus.Extensions.Hosting</a> package provides all the integration we need to add a hosted NServiceBus service <em>and</em> integrate with the built-in DI container.</p><h3 id="configuring-nservicebus">Configuring NServiceBus</h3><p>With any kind of hosted .NET Core application (Console, ASP.NET Core, Worker), we just need to add the extensions package:</p><pre><code class="language-xml">&lt;Project Sdk="Microsoft.NET.Sdk.Web"&gt; &lt;PropertyGroup&gt; &lt;TargetFramework&gt;netcoreapp3.1&lt;/TargetFramework&gt; &lt;/PropertyGroup&gt; &lt;ItemGroup&gt; &lt;PackageReference Include="NServiceBus.Extensions.Hosting" Version="1.0.0" /&gt; </code></pre><p>And add the configuration directly off of the host builder:</p><pre><code class="language-csharp">Host.CreateDefaultBuilder(args) .UseNServiceBus(hostBuilderContext =&gt; { var endpointConfiguration = new EndpointConfiguration("WebApplication"); // configure endpoint here return endpointConfiguration; }) .ConfigureWebHostDefaults(webBuilder =&gt; { webBuilder.UseStartup&lt;Startup&gt;(); }); </code></pre><p>Or with a <a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1&amp;tabs=visual-studio">Worker SDK</a>:</p><pre><code class="language-xml">&lt;Project Sdk="Microsoft.NET.Sdk.Worker"&gt; &lt;PropertyGroup&gt; &lt;TargetFramework&gt;netcoreapp3.1&lt;/TargetFramework&gt; &lt;/PropertyGroup&gt; &lt;ItemGroup&gt; &lt;PackageReference Include="Microsoft.Extensions.Hosting" Version="3.1.2" /&gt; </code></pre><p>It's really not much different:</p><pre><code class="language-csharp">public static IHostBuilder CreateHostBuilder(string[] args) =&gt; Host.CreateDefaultBuilder(args) .UseNServiceBus(hostBuilderContext =&gt; { var endpointConfiguration = new EndpointConfiguration("WorkerService"); // configure endpoint here return endpointConfiguration; }); </code></pre><p>And 
our endpoint is up and running.</p><h3 id="logging-and-serialization">Logging and Serialization</h3><p>We're not quite there yet, though. The out-of-the-box serialization is XML (which is fine by me), but many folks prefer JSON. Additionally, the logging support inside of NServiceBus is <em>not</em> currently integrated with this package. For serialization, we can use the new System.Text.Json support instead of Newtonsoft.Json.</p><p>We'll pull in the community packages from <a href="https://github.com/SimonCropp">Simon Cropp</a>:</p><ul><li><a href="https://github.com/NServiceBusExtensions/NServiceBus.Json">NServiceBus.Json</a></li><li><a href="https://github.com/NServiceBusExtensions/NServiceBus.MicrosoftLogging">NServiceBus.MicrosoftLogging.Hosting</a></li></ul><p>With those two packages in place, we can configure our host's serializer and logging:</p><pre><code class="language-csharp">Host.CreateDefaultBuilder(args) .UseMicrosoftLogFactoryLogging() .UseNServiceBus(hostBuilderContext =&gt; { var endpointConfiguration = new EndpointConfiguration("WorkerService"); endpointConfiguration.UseSerialization&lt;SystemJsonSerializer&gt;(); </code></pre><p>We now have integrated logging, hosting, and dependency injection with anything that uses the generic host.</p><h3 id="using-the-logger">Using the logger</h3><p>Now in our handlers, we can add dependencies directly on the Microsoft logger,  <code>ILogger&lt;T&gt;</code>:</p><pre><code class="language-csharp">public class SaySomethingHandler : IHandleMessages&lt;SaySomething&gt; { private readonly ILogger&lt;SaySomethingHandler&gt; _logger; public SaySomethingHandler(ILogger&lt;SaySomethingHandler&gt; logger) =&gt; _logger = logger; public Task Handle(SaySomething message, IMessageHandlerContext context) { _logger.LogInformation("Saying {message}", message.Message); return context.Reply(new SaySomethingResponse { Message = $"Back at ya {message.Message}" }); } }</code></pre><p>And we get an integrated logging experience:</p><pre><code class="language-text">info: NServiceBus.LicenseManager[0] Selected active license from C:\Users\jbogard\AppData\Local\ParticularSoftware\license.xml License Expiration: 2020-06-16 info: Microsoft.Hosting.Lifetime[0] Application started. Press Ctrl+C to shut down. info: Microsoft.Hosting.Lifetime[0] Hosting environment: Development info: Microsoft.Hosting.Lifetime[0] Content root path: C:\Users\jbogard\source\repos\NsbActivities\WorkerService info: WorkerService.SaySomethingHandler[0] Saying Hello World! </code></pre><p>Now with this logging and dependency injection integration, we can use <em>any</em> logger or container that extends the built-in abstractions. 
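For example, swapping the console logger for Serilog is one extra call on the host builder (a minimal sketch, assuming the Serilog.Extensions.Hosting and Serilog.Sinks.Console packages):</p><pre><code class="language-csharp">Host.CreateDefaultBuilder(args)
    // Serilog becomes the implementation behind ILoggerFactory,
    // so the NServiceBus logs wired up above flow through it too
    .UseSerilog((context, loggerConfiguration) =&gt; loggerConfiguration
        .WriteTo.Console())
    .UseMicrosoftLogFactoryLogging()
    .UseNServiceBus(hostBuilderContext =&gt;
    {
        var endpointConfiguration = new EndpointConfiguration("WorkerService");
        endpointConfiguration.UseSerialization&lt;SystemJsonSerializer&gt;();

        return endpointConfiguration;
    });
</code></pre><p>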
My current client (and most) use Serilog, which makes it very easy to plug in to the generic host as well.</p><p>With these packages, we'll be able to <em>delete</em> a lot of infrastructure code that wasn't adding any value, which is always a good thing.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=sqwnUZbrx-c:IOu8kmb1FrI:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=sqwnUZbrx-c:IOu8kmb1FrI:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=sqwnUZbrx-c:IOu8kmb1FrI:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=sqwnUZbrx-c:IOu8kmb1FrI:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=sqwnUZbrx-c:IOu8kmb1FrI:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/sqwnUZbrx-c" height="1" width="1" alt=""/> Avoid In-Memory Databases for Tests https://jimmybogard.com/avoid-in-memory-databases-for-tests/ Jimmy Bogard urn:uuid:3462f902-ee8d-042f-096f-ba77a99f4f8f Wed, 18 Mar 2020 18:25:12 +0000 <p>A <a href="https://github.com/dotnet/efcore/issues/18457">controversial</a> GitHub issue came to my attention a couple of weeks ago around ditching the <a href="https://docs.microsoft.com/en-us/ef/core/providers/in-memory/?tabs=dotnet-core-cli">in-memory provider</a> for Entity Framework Core. This seemed like a no-brainer to me - these database providers are far from trivial to maintain, even for in-memory strategies. It's something our teams learned nearly a</p> <p>A <a href="https://github.com/dotnet/efcore/issues/18457">controversial</a> GitHub issue came to my attention a couple of weeks ago around ditching the <a href="https://docs.microsoft.com/en-us/ef/core/providers/in-memory/?tabs=dotnet-core-cli">in-memory provider</a> for Entity Framework Core. This seemed like a no-brainer to me - these database providers are far from trivial to maintain, even for in-memory strategies. It's something our teams learned nearly a decade ago, that trying to swap out an in-memory strategy for unit testing simply doesn't provide the value folks may hope for.</p><p>It seems rather simple at first - especially in the .NET world and EF Core. EF Core's primary read API is <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/">LINQ</a>. LINQ has two flavors - <code>IEnumerable</code> and <code>IQueryable</code>. With <code>IQueryable</code>, an <code><a href="https://docs.microsoft.com/en-us/dotnet/api/system.linq.iqueryprovider?view=netcore-3.1">IQueryProvider</a></code> translates expression trees to...well whatever makes sense for the underlying store. There's a neat trick that you can do, however, as <code>IEnumerable</code> has a method, <code>AsQueryable</code>, to allow complex expression trees to evaluate directly against an in-memory <code>IEnumerable</code>.</p><p>Thus, in-memory queryables were born. So why not take advantage of this possibility for unit tests? Why not allow us to swap the implementation of some queryable to the in-memory equivalent, and allow us to write unit tests against in-memory stores?</p><p>It all seems so simple, but unfortunately, the devil is in the details.</p><h3 id="simple-ain-t-easy">Simple Ain't Easy</h3><p>LINQ providers aren't easy. They're not even merely difficult, they're some of the <strong>most complex pieces of code you'll see</strong>. 
Why is that?</p><p>A LINQ provider is a compiler, a lexer, and a parser, but doesn't own any of those pieces. It's a transpiler, but instead of the output being text, it's API calls. You have to do similar operations as an actual compiler, dealing with ASTs, building your own AST (often), and figuring out how to make a very wide and complex API that is all the <code>IQueryable</code> surface area.</p><p>Any ORM maintainer can tell you - the code in the query provider can <em>dwarf</em> the code in the rest of the codebase. This is unlike other ORMs that provide a specialized API or language, such as NHibernate or the MongoDB C# driver.</p><p>LINQ surfaces a great amount of flexibility, but that flexibility makes it quite difficult to translate into SQL or any other database API that's already been specifically designed for <em>that database</em>. We're trying to wrap a fit-for-purpose API with a generic-purpose API, and one that can run either in-memory or translated into a database.</p><p>On top of that, a good deal of the LINQ surface area can't be translated into SQL, and there are Very Important things that don't translate into LINQ. So you have to extend <code>IQueryable</code> to do fun things like:</p><pre><code class="language-csharp">using (var context = new BloggingContext()) { var blogs = context.Blogs .Include(blog =&gt; blog.Posts) .ThenInclude(post =&gt; post.Author) .ThenInclude(author =&gt; author.Photo) .ToList(); }</code></pre><p>Yikes! We also have <code>async</code> in the mix, so now we're at the point where the <code>IQueryable</code> isn't remotely the same for in-memory.</p><p>But that won't stop us from trying!</p><h3 id="pitfalls-of-in-memory">Pitfalls of In-Memory</h3><p>Our teams tried a number of years ago to speed up integration tests by swapping in-memory providers, but we found a <em>number</em> of problems with this approach that led us to abandoning it altogether.</p><h4 id="you-must-write-a-real-integration-test-anyway">You MUST write a real integration test anyway</h4><p>First and foremost, an in-memory provider is a pale imitation for the real thing. Even with writing in-memory tests, we still absolutely wrote integration tests against a real database. Those unit tests with in-memory looked exactly like the integration tests, just with a different provider.</p><p>Which led us to wonder - if we were writing the tests twice, what was the value of having two tests?</p><p>You <em>could</em> write a single test codebase, and run it twice - one with in-memory and one with the real thing, but that has other problems.</p><h4 id="you-must-allow-raw-access-to-the-underlying-data-api">You MUST allow raw access to the underlying data API</h4><p>ORMs allow you to encapsulate your data access, which is good, it allows us to be more productive by focusing on the business problem at hand. But it's also bad because it abstracts your data access, leaving developers to assume that they don't actually need to understand what is going on behind the scenes.</p><p>In our projects, we take a pragmatic approach - use the ORM's API when it works, and drop down to the database API when it becomes painful. 
ORMs these days make it quite easy, such as <a href="https://docs.microsoft.com/en-us/ef/core/querying/raw-sql">EF Core's raw SQL</a> capabilities:</p><pre><code class="language-csharp">var blogs = context.Blogs .FromSqlRaw("SELECT * FROM dbo.Blogs") .ToList();</code></pre><p>There are numerous limitations, many more than with EF6, which is why we often bring in a tool like Dapper to do complex SQL:</p><pre><code class="language-csharp">var employeeHierarchy = connection.Execute&lt;EmployeeDto&gt;(@"WITH cte_org AS ( SELECT staff_id, first_name, manager_id FROM sales.staffs WHERE manager_id IS NULL UNION ALL SELECT e.staff_id, e.first_name, e.manager_id FROM sales.staffs e INNER JOIN cte_org o ON o.staff_id = e.manager_id ) SELECT * FROM cte_org;");</code></pre><p>So how do we handle this scenario in our tests? Don't write the unit test (this assumes we're actually writing tests twice)? Somehow exclude it?</p><p>What I tend to find is that instead of dropping down to SQL, developers <em>avoid</em> SQL just so that it satisfies the tool. This is unacceptable.</p><h4 id="the-apis-don-t-match">The APIs don't match</h4><p>The in-memory API of vanilla <code>IQueryProvider</code> doesn't match the LINQ query provider. This means you'll have methods that don't make sense, are no-ops, or even nonsensical for in-memory.</p><p>The most obvious example is <code>Include</code>, which instructs the LINQ provider to basically do a join to eagerly fetch some child records. This is to avoid multiple round trips. However, this means nothing to in-memory. You can keep it, remove it, add more, remove more, doesn't matter.</p><p>It gets worse on the flip side - when LINQ provides some APIs that aren't supported by the query provider. Since LINQ can run in-memory, it can execute <em>anything</em> on the client side. But when you try to run <em>anything</em> on the server, that won't work:</p><pre><code class="language-csharp">var blogs = context.Blogs .Where(blog =&gt; StandardizeUrl(blog.Url).Contains("dotnet")) .ToList();</code></pre><p>Instead, LINQ providers allow a narrow subset of methods, and even beyond that, a limited set of core .NET methods to translate on the server. But not all obvious methods, and not even all overloads are supported. You don't know this until you actually run the LINQ query against the enormous LINQ provider.</p><h4 id="databases-means-transactions">Databases Means Transactions</h4><p>If I look at a typical integration test we write, we're using both the public and non-public API in a series of transactions to interact with the system under test. 
Here's a typical example:</p><pre><code class="language-csharp">[Fact] public async Task Should_edit_department() { var adminId = await SendAsync(new CreateEdit.Command { FirstMidName = "George", LastName = "Costanza", HireDate = DateTime.Today }); var admin2Id = await SendAsync(new CreateEdit.Command { FirstMidName = "George", LastName = "Costanza", HireDate = DateTime.Today }); var dept = new Department { Name = "History", InstructorId = adminId, Budget = 123m, StartDate = DateTime.Today }; await InsertAsync(dept); Edit.Command command = null; await ExecuteDbContextAsync(async (ctxt, mediator) =&gt; { var admin2 = await FindAsync&lt;Instructor&gt;(admin2Id); command = new Edit.Command { Id = dept.Id, Name = "English", Administrator = admin2, StartDate = DateTime.Today.AddDays(-1), Budget = 456m }; await mediator.Send(command); }); var result = await ExecuteDbContextAsync(db =&gt; db.Departments.Where(d =&gt; d.Id == dept.Id).Include(d =&gt; d.Administrator).SingleOrDefaultAsync()); result.Name.ShouldBe(command.Name); result.Administrator.Id.ShouldBe(command.Administrator.Id); result.StartDate.ShouldBe(command.StartDate.GetValueOrDefault()); result.Budget.ShouldBe(command.Budget.GetValueOrDefault()); }</code></pre><p>It's long, but it combines both public APIs (sending commands to create items) and non-public APIs (interacting directly with the <code>DbContext</code> to insert rows), executing an individual command for the test, then finally querying to pull an item out.</p><p>In integration tests of long ago, we'd put this entire set of operations in a transaction/unit of work. That's not at all how the application behaves, however, and we'd see many false positives that would only break when each operation was distinct. This is because ORMs use patterns like Unit of Work and Identity Map to determine what to persist and when.</p><p>With in-memory providers, there is no "ACID", everything is immediately durable. Each operation is immediately performed, and <a href="https://github.com/dotnet/efcore/blob/master/src/EFCore.InMemory/Storage/Internal/InMemoryTransaction.cs">transactions do nothing</a>! It might seem like a trivial thing, who cares if everything is always immediately durable? The problem is, just like an integration test that uses a single transaction, is that real-life behavior is much different and more complex, and will break in ways you can't predict. Enough false positives, and you wind up distrusting these unit tests.</p><p>The database enforces constraints and visibility and isolation levels that these attempts can't, and inevitably, you'll hit problems</p><h3 id="but-it-s-working-for-me-">But it's working for me!</h3><p>Great! You're one of the lucky few. Your usage is trivial enough that can fit into the constraints of an in-memory provider. We've tried this (and other in-memory DBs, like SQLite), and it's always failed.</p><p>Unfortunately for the EF team, maintaining this provider <em>for public consumption</em> is a cost for them, and a tradeoff. They're coding that instead of coding something else. 
The question becomes - is the value of a (always flawed) in-memory provider worth the effort for the team?</p><p>For me, no, it's not worth negative effects for our team.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=EZE_YP3T9Ac:w-EBMYGJ0m8:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=EZE_YP3T9Ac:w-EBMYGJ0m8:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=EZE_YP3T9Ac:w-EBMYGJ0m8:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=EZE_YP3T9Ac:w-EBMYGJ0m8:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=EZE_YP3T9Ac:w-EBMYGJ0m8:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/EZE_YP3T9Ac" height="1" width="1" alt=""/> Document-Level Pessimistic Concurrency in MongoDB https://jimmybogard.com/document-level-pessimistic-concurrency-in-mongodb-now-with-intent-locks/ Jimmy Bogard urn:uuid:7e7da086-59c8-6c92-746b-156d3d688013 Mon, 16 Mar 2020 12:28:46 +0000 <!--kg-card-begin: markdown--><p>In the <a href="https://jimmybogard.com/document-level-optimistic-concurrency-in-mongodb/">last post</a>, I got quite a few comments on some other ways to approach OCC. One pointed out that I wanted to explore was using a &quot;<a href="https://www.mongodb.com/blog/post/how-to-select--for-update-inside-mongodb-transactions">SELECT...FOR UPDATE</a>&quot;, which basically will grant an intent lock on the document in question. With an intent lock, we</p> <!--kg-card-begin: markdown--><p>In the <a href="https://jimmybogard.com/document-level-optimistic-concurrency-in-mongodb/">last post</a>, I got quite a few comments on some other ways to approach OCC. One pointed out that I wanted to explore was using a &quot;<a href="https://www.mongodb.com/blog/post/how-to-select--for-update-inside-mongodb-transactions">SELECT...FOR UPDATE</a>&quot;, which basically will grant an intent lock on the document in question. With an intent lock, we transition from optimistic concurrency, where we read a document and assume that others won't modify it, but check anyway, to pessimistic concurrency, where we <em>know</em> that our document will be modified by others.</p> <p>Document-level locks in MongoDB can be granted in single operations or as part of a transaction. Intent locks can be granted as part of a write operation, or at the start of a transaction. Both are slightly different in their approach, one requiring an explicit write, and the other for the entire transaction, so let's look at each approach.</p> <h3 id="selectforupdate">SELECT ... FOR UPDATE</h3> <p>In this approach, we want to trigger an intent exclusive lock (IX) on our first read of the document when we pull it out. The link above describes an approach with the bare API, so we can translate this into the C# version.</p> <p>The general idea here is that MongoDb itself doesn't support this sort of &quot;intentional&quot; locking, but we can trigger it by updating some field to some new value. If another process has already locked that document, we'll immediately error out. 
In my case, we can just invent some field, I called it &quot;ETag&quot; to mirror Cosmos DB:</p> <pre><code class="language-csharp">public class Counter { public Guid Id { get; set; } public int Version { get; set; } public Guid ETag { get; set; } public int Value { get; set; } } </code></pre> <p>When we first load up the document, we don't just load - we find-and-update that <code>ETag</code> property to some new value, BUT, inside a transaction:</p> <pre><code class="language-csharp">using var session = await client.StartSessionAsync().ConfigureAwait(false); var transactionOptions = new TransactionOptions(readConcern: ReadConcern.Local, writeConcern: WriteConcern.W1); session.StartTransaction(transactionOptions); try { var update = Builders&lt;Counter&gt;.Update.Set(c =&gt; c.ETag, Guid.NewGuid()); var loaded = await collection.FindOneAndUpdateAsync(session, c =&gt; c.Id == id, update); loaded.Value++; result = await collection.ReplaceOneAsync(session, c =&gt; c.Id == id, loaded, new UpdateOptions { IsUpsert = false }); await session.CommitTransactionAsync(); } catch { await session.AbortTransactionAsync(); throw; } return result; </code></pre> <p>In that first write operation, or trivial change locks the document for the duration of the transaction. After we perform whatever operation we need to on the document, we write back as normal - no version checking.</p> <p>When we do this, our throughput drops fairly drastically - since we're locking ahead of time, we disallow any other concurrent write operations. Another concurrent write at the beginning of the operation will simply error out.</p> <p>In practice, we'd likely want to have some sort of transparent retry mechanism around our operation - especially in scenarios where we're likely to see write collisions. We'd also likely want to introduce some jitter or randomness in our delays, since two operations retrying at the same time will likely collide again.</p> <p>It can get fairly complicated, which is why you'd only want to introduce this in scenarios where optimistic locking isn't appropriate or viable. In fact, you can see an example of this in action in the NServiceBus MongoDB storage library - it's designed for concurrent operations, and <a href="https://github.com/Particular/NServiceBus.Storage.MongoDB/blob/master/src/NServiceBus.Storage.MongoDB/SynchronizedStorage/StorageSession.cs">uses pessimistic locking to do so</a>.</p> <h3 id="wrappingitup">Wrapping it up</h3> <p>With optimistic and pessimistic locking solutions possible, we see the flexibility of the MongoDB API. However, I do wish that it were <em>easier</em> and <em>more explicit</em> to call out locking strategies. 
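Something as small as an extension method that hides the dummy-write trick would go a long way - a hypothetical sketch, not part of the driver, assuming documents expose an <code>ETag</code> field like the <code>Counter</code> above:</p> <pre><code class="language-csharp">public static class PessimisticLockExtensions
{
    // Acquires the intent-exclusive lock by touching the document's ETag
    // inside the supplied session's transaction, then returns the document
    public static Task&lt;Counter&gt; FindOneAndLockAsync(
        this IMongoCollection&lt;Counter&gt; collection,
        IClientSessionHandle session,
        Guid id)
    {
        var update = Builders&lt;Counter&gt;.Update.Set(c =&gt; c.ETag, Guid.NewGuid());
        return collection.FindOneAndUpdateAsync(session, c =&gt; c.Id == id, update);
    }
}
</code></pre> <p>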
This might be possible with other APIs on top of the MongoDB client, but as others have pointed out, you're far less likely to &quot;foot-gun&quot; yourself with a client API that encapsulates all of this for us.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=a2aZHLzkqBY:Kf7HMZ7vzlk:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=a2aZHLzkqBY:Kf7HMZ7vzlk:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=a2aZHLzkqBY:Kf7HMZ7vzlk:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=a2aZHLzkqBY:Kf7HMZ7vzlk:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=a2aZHLzkqBY:Kf7HMZ7vzlk:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/a2aZHLzkqBY" height="1" width="1" alt=""/> Immutability in DTOs? https://jimmybogard.com/immutability-in-dtos/ Jimmy Bogard urn:uuid:d2544b34-c634-e965-f858-5bccce9ba9f8 Thu, 27 Feb 2020 14:11:00 +0000 <p>Something that comes up every so often is the question of whether or not <a href="https://en.wikipedia.org/wiki/Data_transfer_object">Data Transfer Objects</a> should be immutable - that is, should our design of the classes and types of a DTO enforce immutability.</p><p>To answer this question, we first need to look at what purpose a DTO</p> <p>Something that comes up every so often is the question of whether or not <a href="https://en.wikipedia.org/wiki/Data_transfer_object">Data Transfer Objects</a> should be immutable - that is, should our design of the classes and types of a DTO enforce immutability.</p><p>To answer this question, we first need to look at what purpose a DTO serves. As the name explicitly calls out, it is an object that is used to carry data between processes. In practice, we don't literally transport an object back and forth between processes, and instead there is some form of serialization.</p><p>Where the waters get muddied is when types are used to describe message contracts - whether in asynchronous communication in the form of messaging or APIs. Given our usage of DTOs these days, why might we want to enforce immutability?</p><h3 id="benefits-of-immutability">Benefits of immutability</h3><p>The primary benefit of immutability is, well, you can't change the object! That makes a lot of sense on the <em>receiving</em> side of a communication - I shouldn't be able to change the message, even it's a deserialized version of that message.</p><p>There are <em>some</em> cases where I might want to be able to change a message as it flows through the system. For example, with a <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/DocumentMessage.html">Document Message</a>, I often pass the same message between many endpoints, and each endpoint processes the message and adds its own information as it goes along, sometimes with an attached <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/RoutingTable.html">Routing Slip</a>.</p><p>This is rarer case, however, and what often winds up happening is I'm not mutating the original message, but the <strong>deserialized copy</strong> of the message, arriving in the form of the object.</p><p>By far the majority of cases are the receiver <strong>should not modify</strong> the message upon receipt. Sounds great! 
So why don't we actually do this?</p><h3 id="immutability-in-a-mutable-world">Immutability in a mutable world</h3><p>The main issue with immutability is that it depends on your language, platform, and even tooling, to support this.</p><p>In my systems, the default message content type is <code>application/json</code>, and the senders/receivers are .NET/C# or JavaScript/TypeScript. The main hurdle is that neither of these languages, or the platforms and tooling, support immutability out-of-the-box.</p><p>In order to declare an <code>Order</code> message as immutable, it's fairly complex. You have to make sure all write paths are initialized on construction:</p><pre><code class="language-csharp">public class Order { public Order(Customer customer, List&lt;LineItem&gt; lineItems) { Customer = customer; LineItems = new ReadOnlyCollection&lt;LineItem&gt;(lineItems); } public Customer Customer { get; } public IReadOnlyCollection&lt;LineItem&gt; LineItems { get; } } public class Customer { public Customer (string firstName, string lastName) { // my fingers are tired } public string FirstName { get; } public string LastName { get; } // etc } public class LineItem { public class LineItem(int quantity, decimal total, Product product) { // mommy are we there yet } public decimal Total { get; } public int Quantity { get; } public Product Product { get; } }</code></pre><p>Because C# doesn't have first-class support for an "immutable" type (like an <a href="https://en.wikipedia.org/wiki/Record_(computer_science)">F# Record Type</a>), it takes a LOT of typing to build out these structures.</p><p>When you're done, you'll find that <em>constructing</em> these types is a huge pain. You'll get very long construction declarations:</p><pre><code class="language-csharp">var order = new Order( new Customer( "Bob", Saget"), new { new LineItem(10, 100m, new Product( ) } ); </code></pre><p>Gross! How do we know what value corresponds to which property? We can add named arguments to make it readable, but those are optional, and still don't correspond to the actual property names (all camelCase). Typically you see no parameter names, making it quite difficult to understand how this all fits together.</p><p>Contrast this with a normal <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/object-and-collection-initializers">C# Object Initializer</a> statement:</p><pre><code class="language-csharp">var order = new Order { Customer = new Customer { FirstName = "Bob", LastName = "Saget" }, LineItems = new { new LineItem { Quantity = 10, Total = 100m, Product = new Product { } } } };</code></pre><p>I can see exactly how the entire object is built, one member at a time.</p><p>One other major issue with immutability in C# is the support for serializers and deserializers to work is quite a pain. If these tools see something without a setter, that's a problem. If we don't have a no-arg constructor, that's a problem. So we wind up having to bend the tools to be able to handle our types, and very often, with lots of compromises (private, unused setters etc.).</p><p>Finally, the nail in the coffin for me, is that when we introduce immutability via constructors, this introduces a breakable contract - a method with fixed arguments. If we add a value to our message, we introduce the possibility that receivers won't be able to properly deserialize.</p><p>Given all this, I find it's generally a net negative to attempt immutable DTOs. 
If the language, framework, and tooling supported it, that would be a different story.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=CutbsWkRimg:Zv_bdb06k1U:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=CutbsWkRimg:Zv_bdb06k1U:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=CutbsWkRimg:Zv_bdb06k1U:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=CutbsWkRimg:Zv_bdb06k1U:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=CutbsWkRimg:Zv_bdb06k1U:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/CutbsWkRimg" height="1" width="1" alt=""/> MediatR 8.0 Released https://jimmybogard.com/mediatr-8-0-released/ Jimmy Bogard urn:uuid:64a2c071-3bf0-9814-a415-58633bf01aa5 Tue, 31 Dec 2019 14:54:19 +0000 <!--kg-card-begin: markdown--><p>This release brings some (minor) breaking changes to the public API. First, we added a non-generic overload to <code>Send</code> on <code>IMediator</code>:</p> <pre><code class="language-diff">public interface IMediator { Task&lt;TResponse&gt; Send&lt;TResponse&gt;(IRequest&lt;TResponse&gt; request, CancellationToken cancellationToken = default); + Task&lt;object&gt; Send(object request, CancellationToken cancellationToken = default)</code></pre> <!--kg-card-begin: markdown--><p>This release brings some (minor) breaking changes to the public API. First, we added a non-generic overload to <code>Send</code> on <code>IMediator</code>:</p> <pre><code class="language-diff">public interface IMediator { Task&lt;TResponse&gt; Send&lt;TResponse&gt;(IRequest&lt;TResponse&gt; request, CancellationToken cancellationToken = default); + Task&lt;object&gt; Send(object request, CancellationToken cancellationToken = default); Task Publish(object notification, CancellationToken cancellationToken = default); Task Publish&lt;TNotification&gt;(TNotification notification, CancellationToken cancellationToken = default) where TNotification : INotification; } </code></pre> <p>Second, we've modified the <code>Mediator</code> class to include the <code>CancellationToken</code> for the virtual <code>PublishCore</code> method that allows implementors to override the <a href="https://github.com/jbogard/mediatr/wiki#publish-strategies">publishing strategy</a> and accept the notification itself:</p> <pre><code>- protected virtual async Task PublishCore(IEnumerable&lt;Func&lt;Task&gt;&gt; allHandlers) + protected virtual async Task PublishCore(IEnumerable&lt;Func&lt;INotification, CancellationToken, Task&gt;&gt; allHandlers, INotification notification, CancellationToken cancellationToken) { foreach (var handler in allHandlers) { - await handler().ConfigureAwait(false); + await handler(notification, cancellationToken).ConfigureAwait(false); } } </code></pre> <p>Finally, a new feature! We've added some new built-in pipeline behaviors for <a href="https://github.com/jbogard/mediatr/wiki#exceptions-handling">handling exceptions</a>. 
You can now handle specific exceptions (similar to an <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/controllers/filters?view=aspnetcore-3.1#exception-filters">Exception Filter</a> in ASP.NET Core):</p> <pre><code class="language-c#">public interface IRequestExceptionHandler&lt;in TRequest, TResponse, TException&gt; where TException : Exception { Task Handle(TRequest request, TException exception, RequestExceptionHandlerState&lt;TResponse&gt; state, CancellationToken cancellationToken); } </code></pre> <p>Alternatively, if you don't want to provide alternate responses based on exceptions, you can provide exception actions:</p> <pre><code class="language-c#">public interface IRequestExceptionAction&lt;in TRequest, in TException&gt; where TException : Exception { Task Execute(TRequest request, TException exception, CancellationToken cancellationToken); } </code></pre> <p>These are a bit simpler if you want to just provide some exception logging. I've also updated the <a href="https://www.nuget.org/packages/MediatR.Extensions.Microsoft.DependencyInjection/">MS DI Extensions package</a>, which registers these new behaviors and interfaces. Enjoy!</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=eoI11zYQ7zQ:pXXaiNQzGQ0:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=eoI11zYQ7zQ:pXXaiNQzGQ0:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=eoI11zYQ7zQ:pXXaiNQzGQ0:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=eoI11zYQ7zQ:pXXaiNQzGQ0:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=eoI11zYQ7zQ:pXXaiNQzGQ0:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/eoI11zYQ7zQ" height="1" width="1" alt=""/> User Secrets in Docker-based .NET Core Worker Applications https://jimmybogard.com/user-secrets-in-docker-based-net-core-worker-applications/ Jimmy Bogard urn:uuid:ebdbc966-1716-66a1-50c4-a8df2cda2a91 Mon, 16 Dec 2019 16:53:16 +0000 <!--kg-card-begin: markdown--><p>As part of the recent <a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-evaluating-the-landscape/">Message Endpoints in Azure series</a>, I wanted to check out the new .NET Core 3.0 Worker templates to see how the templates have improved the situation (actually, a lot), but there are still some things missing from the Worker SDK versus the Web SDK.</p> <!--kg-card-begin: markdown--><p>As part of the recent <a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-evaluating-the-landscape/">Message Endpoints in Azure series</a>, I wanted to check out the new .NET Core 3.0 Worker templates to see how the templates have improved the situation (actually, a lot), but there are still some things missing from the Worker SDK versus the Web SDK.</p> <p><a href="https://docs.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1&amp;tabs=visual-studio#worker-service-template">.NET Core Workers</a>, introduced initially as background services in .NET Core 2.x, are now a top-level <code>dotnet</code> and Visual Studio template:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/11/2019/Annotation%202019-12-16%20103033.png" alt></p> <p>When you create the worker template, you're also given the option to enable Docker support:</p> <p><img 
src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/11/2019/Annotation%202019-12-16%20103151.png" alt></p> <p>In that series, I connect to my instance of Azure Service Bus in the cloud. However, I don't want to commit the connection string to source control, so I use <a href="https://docs.microsoft.com/en-us/aspnet/core/security/app-secrets?view=aspnetcore-3.1&amp;tabs=windows">User Secrets</a> to store the connection string locally. Out-of-the-box, this SDK will include a user secrets ID in the project file, however, if you actually try to <em>use</em> the secrets when running, it won't work!</p> <p>The first reason is because there was an issue in the Worker MSBuild task pipeline that looked for either the Web SDK or an explicit reference to the User Secrets NuGet package to decide to include the User Secrets in the configuration sources - but <a href="https://github.com/aspnet/Extensions/issues/2743">that's been fixed</a> (<a href="https://github.com/aspnet/Extensions/issues/2743#issuecomment-562264614">workaround here</a>).</p> <p>However, it still doesn't work if you run a worker in a Docker container inside Visual Studio. The reason here is because when you run in a Docker container, the Visual Studio tooling does a lot of work to make running the container easier - including <a href="https://docs.microsoft.com/en-us/visualstudio/containers/container-build?view=vs-2019#volume-mapping">mapping a bunch of volumes</a>. On that list is User Secrets - <em>however</em> - user secrets are only mapped for Web SDK!</p> <p>Running our worker app inside a Docker container during development means we don't have any user secrets - and the solutions aren't great. Environment variables, Docker secrets, all are more annoying than just using the original secrets.</p> <h3 id="mappingthevolume">Mapping the volume</h3> <p>Instead of mucking around with custom solutions, ideally we can just map the user secrets volume ourselves. Luckily, there's an easy way to do so - we can specify custom Docker command line arguments:</p> <pre><code class="language-xml">&lt;PropertyGroup&gt; &lt;TargetFramework&gt;netcoreapp3.1&lt;/TargetFramework&gt; &lt;UserSecretsId&gt;89595f67-a846-41e5-a74e-f876488ea8be&lt;/UserSecretsId&gt; &lt;DockerDefaultTargetOS&gt;Linux&lt;/DockerDefaultTargetOS&gt; &lt;DockerfileRunArguments&gt;-v &quot;$(AppData)/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro&quot;&lt;/DockerfileRunArguments&gt; &lt;/PropertyGroup&gt; </code></pre> <p>The <code>DockerfileRunArguments</code> element we use to pass through our additional volume mapping, which you can verify with a normal Web SDK running under Docker to see what <em>it</em> passes through.</p> <p>I've <a href="https://github.com/microsoft/DockerTools/issues/223">opened a GitHub issue</a> with the SDK tools, but in the meantime, we can use this workaround. 
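Consuming the secret is then just normal configuration access inside the worker (a sketch; &quot;AzureServiceBus&quot; is simply whatever key name was stored with <code>dotnet user-secrets set</code>):</p> <pre><code class="language-csharp">public class Worker : BackgroundService
{
    private readonly ILogger&lt;Worker&gt; _logger;
    private readonly string _connectionString;

    public Worker(ILogger&lt;Worker&gt; logger, IConfiguration configuration)
    {
        _logger = logger;
        // In Development this value comes from user secrets; elsewhere it
        // falls back to appsettings.json or environment variables
        _connectionString = configuration[&quot;AzureServiceBus&quot;];
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _logger.LogInformation(&quot;Connecting to Azure Service Bus&quot;);
        // use _connectionString to connect here
        return Task.CompletedTask;
    }
}
</code></pre> <p>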
With this in place, our Worker application can now use User Secrets whether it's running on our host or in a container.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=e-QYzFpAQdA:66EAg1xhHxI:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=e-QYzFpAQdA:66EAg1xhHxI:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=e-QYzFpAQdA:66EAg1xhHxI:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=e-QYzFpAQdA:66EAg1xhHxI:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=e-QYzFpAQdA:66EAg1xhHxI:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/e-QYzFpAQdA" height="1" width="1" alt=""/> Contoso University Vertical Slice App Updated to ASP.NET Core 3.0 https://jimmybogard.com/contoso-university-vertical-slice-app-updated-to-asp-net-core-3-0/ Jimmy Bogard urn:uuid:c4b4b33f-dc3c-4f29-483d-d7e61c72a891 Mon, 18 Nov 2019 09:15:01 +0000 <!--kg-card-begin: markdown--><p>To keep a running example of &quot;how we do web apps&quot;, I've updated my <a href="https://github.com/jbogard/contosoUniversityDotNetCore-Pages">Contoso University example app to ASP.NET Core 3.0</a>. This sample app is just a re-jiggering of <a href="https://docs.microsoft.com/en-us/aspnet/core/data/ef-rp/intro?view=aspnetcore-3.0&amp;tabs=visual-studio">Microsoft's Contoso University Razor Pages sample app </a>. It shows how we (<a href="https://headspring.com/">Headspring</a>) typically use:</p> <ul> <li>CQRS w/</li></ul> <!--kg-card-begin: markdown--><p>To keep a running example of &quot;how we do web apps&quot;, I've updated my <a href="https://github.com/jbogard/contosoUniversityDotNetCore-Pages">Contoso University example app to ASP.NET Core 3.0</a>. This sample app is just a re-jiggering of <a href="https://docs.microsoft.com/en-us/aspnet/core/data/ef-rp/intro?view=aspnetcore-3.0&amp;tabs=visual-studio">Microsoft's Contoso University Razor Pages sample app </a>. It shows how we (<a href="https://headspring.com/">Headspring</a>) typically use:</p> <ul> <li>CQRS w/ MediatR</li> <li>Razor Pages models w/ AutoMapper</li> <li>Validation w/ Fluent Validation</li> <li>Conventional HTML w/ HtmlTags</li> <li>Database migrations w/ RoundhousE</li> <li>Integration testing w/ xUnit</li> <li>Vertical Slice Architecture</li> </ul> <p>The original application didn't really have much/any behavior to speak of in the EF models, so there's not any unit tests, just integration tests. If the app was more than CRUD, we'd refactor handler behavior down to the domain model.</p> <p>The build script is just pure PowerShell, but in our typical systems, we'd use an actual task-based script runner, like psake or FAKE. 
Otherwise, it's pretty close to our &quot;normal&quot; stack and usage.</p> <p>Updating the app to ASP.NET Core 3.0 was quite straightforward, the most I had to do was update some of the models and database to match the updated sample, and convert the services configuration to use the simplified &quot;<code>AddRazorPages</code>&quot; syntax.</p> <p>Enjoy!</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Ea6CkvXWIQ0:FysZu_C9q_o:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Ea6CkvXWIQ0:FysZu_C9q_o:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=Ea6CkvXWIQ0:FysZu_C9q_o:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Ea6CkvXWIQ0:FysZu_C9q_o:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=Ea6CkvXWIQ0:FysZu_C9q_o:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/Ea6CkvXWIQ0" height="1" width="1" alt=""/> Document-Level Optimistic Concurrency in MongoDB https://jimmybogard.com/document-level-optimistic-concurrency-in-mongodb/ Jimmy Bogard urn:uuid:f9ca450f-32a1-3619-c3c4-625e9ca7a6ff Wed, 30 Oct 2019 19:42:58 +0000 <!--kg-card-begin: markdown--><p>I've had a number of projects now that have used MongoDB, and each time, I've needed to dig deep into the transaction support. But in addition to transaction support, I needed to understand the <a href="https://docs.mongodb.com/manual/faq/concurrency/">concurrency and locking models of Mongo</a>. Unlike many other NoSQL databases, Mongo has locks at the</p> <!--kg-card-begin: markdown--><p>I've had a number of projects now that have used MongoDB, and each time, I've needed to dig deep into the transaction support. But in addition to transaction support, I needed to understand the <a href="https://docs.mongodb.com/manual/faq/concurrency/">concurrency and locking models of Mongo</a>. Unlike many other NoSQL databases, Mongo has locks at the global, database, or collection level, but <em>not</em> at the document level (or row-level, like SQL).</p> <p>If two processes read a document, then update it, as long as those updates don't collide they'll both succeed, and we'll potentially lose data.</p> <p>Mongo has had a lot of work done to strengthen its distributed story (<a href="https://jepsen.io/analyses/mongodb-3-6-4">see the Jepsen analysis</a>), it has no built-in support for optimistic concurrency control at a document level. With SQL Server, you can use <a href="https://en.wikipedia.org/wiki/Snapshot_isolation">Snapshot Isolation</a> to guarantee no other process has modified data since you read. With Cosmos DB, <a href="https://docs.microsoft.com/en-us/azure/cosmos-db/database-transactions-optimistic-concurrency#optimistic-concurrency-control">an etag value</a> is used to check the version of the document being written vs. what exists.</p> <p>Luckily, it's fairly straightforward to implement document-level <a href="https://en.wikipedia.org/wiki/Optimistic_concurrency_control">optimistic concurrency control</a>. 
But first, let's prove that without OCC, we can have bad writes.</p> <h3 id="withoutocc">Without OCC</h3> <p>To start, I'm going to create a very simple document, just an identifier and a counter:</p> <pre><code class="language-c#">public class Counter { public Guid Id { get; set; } public int Value { get; set; } } </code></pre> <p>I'm just going to spin off some tasks to increment the value of the counter, keeping track of the number of writes and final value:</p> <pre><code class="language-c#">var document = await collection.AsQueryable().Where(doc =&gt; doc.Id == id).SingleAsync(); Console.WriteLine($&quot;Before : {document.Value}&quot;); var tasks = Enumerable.Range(0, 100).Select(async i =&gt; { var loaded = await collection.AsQueryable().Where(doc =&gt; doc.Id == id).SingleAsync(); loaded.Value++; long result; do { result = (await collection.ReplaceOneAsync(c =&gt; c.Id == id, loaded, new UpdateOptions {IsUpsert = false})).ModifiedCount; } while (result != 1); return result; }).ToList(); var total = await Task.WhenAll(tasks); document = await collection.AsQueryable().Where(doc =&gt; doc.Id == id).SingleAsync(); Console.WriteLine($&quot;After : {document.Value}&quot;); Console.WriteLine($&quot;Modified: {total.Sum(r =&gt; r)}&quot;); </code></pre> <p>Each pass, I load up the document and increment the value by 1. However, I can have multiple tasks executing at once, so two tasks might read the same value, but only increment by 1. To force a dual write, I continue to update until the collection lock is released. In a real-world scenario, there would be delays between reads/writes that would introduce this issue.</p> <p>When I run this, I should see an initial value of 0, a final value of 100, and a modified count of 100. But I don't, because some value overwrote each other:</p> <pre><code>Before : 1 After : 92 Modified: 100 </code></pre> <p>I modified 100 times, but the counter only made it up to 92! Let's add some optimistic concurrency to improve things.</p> <h3 id="withocc">With OCC</h3> <p>Some implementations of OCC use a timestamp, but that often isn't precise enough, so instead I'm using a monotonic counter as my version. It starts at zero and goes up from there:</p> <pre><code class="language-c#">public class Counter { public Guid Id { get; set; } public int Version { get; set; } public int Value { get; set; } } </code></pre> <p>Version design is a bit more complex, I'm just keeping things simple but we can get as complicated as we want. 
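For example, a random <code>Guid</code> works just as well as a version number if all you care about is detecting <em>any</em> change rather than ordering - a hypothetical variant, not what the rest of this post uses:</p> <pre><code class="language-c#">public class Counter
{
    public Guid Id { get; set; }
    // Replaced with Guid.NewGuid() on every successful write,
    // mirroring the etag approach Cosmos DB uses
    public Guid ETag { get; set; }
    public int Value { get; set; }
}
</code></pre> <p>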
Now when I update, I'm going to make sure that I both increment my counter <em>and</em> version, and when I send the update, I'll include an additional clause against the originally read version:</p> <pre><code class="language-c#">var tasks = Enumerable.Range(0, 100).Select(async i =&gt; { var loaded = await collection.AsQueryable().Where(doc =&gt; doc.Id == id).SingleAsync(); var version = loaded.Version; loaded.Value++; loaded.Version++; var result = await collection.ReplaceOneAsync( c =&gt; c.Id == id &amp;&amp; c.Version == version, loaded, new UpdateOptions { IsUpsert = false }); return result; }).ToList(); </code></pre> <p>I removed the &quot;retry&quot; to make sure that I don't get any overwrites, and with this in place, the final values line up:</p> <pre><code>Before : 0 After : 92 Modified: 92 </code></pre> <p>However, if I really wanted to make sure that I actually get all of those updates in, I'd need to retry the entire operation:</p> <pre><code class="language-c#">var tasks = Enumerable.Range(0, 100).Select(async i =&gt; { ReplaceOneResult result; do { var loaded = await collection.AsQueryable() .Where(doc =&gt; doc.Id == id) .SingleAsync(); var version = loaded.Version; loaded.Value++; loaded.Version++; result = await collection.ReplaceOneAsync( c =&gt; c.Id == id &amp;&amp; c.Version == version, loaded, new UpdateOptions {IsUpsert = false}); } while (result.ModifiedCount != 1); return result; }).ToList(); </code></pre> <p>With a simple retry in place, I make sure I reload the document in question, get a refreshed version, and now all my numbers add up:</p> <pre><code>Before : 0 After : 100 Modified: 100 </code></pre> <p>Exactly what I was looking for!</p> <h3 id="ageneralsolution">A general solution</h3> <p>This works just fine if I &quot;remember&quot; to include that <code>Where</code> clause correctly, but there's a better way if we want a general solution. For that, I'd do pretty much what I would have in the <a href="https://jimmybogard.com/life-beyond-distributed-transactions-an-apostates-implementation-dispatching-example/">Life Beyond Distributed Transactions</a> series - introduce a <a href="https://martinfowler.com/eaaCatalog/repository.html">Repository</a>, <a href="https://martinfowler.com/eaaCatalog/unitOfWork.html">Unit of Work</a>, and <a href="https://martinfowler.com/eaaCatalog/identityMap.html">Identity Map</a>. These can all start very simple, but I can encapsulate all of the version checking/managing in these objects instead of forcing all users to remember to include that <code>Where</code> clause.</p> <p>If there's even a remote possibility that you'll see concurrent updates, you'll likely go down the path of optimistic concurrency. 
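To give a feel for what that encapsulation might look like, here's a rough repository sketch that owns the version bump and the <code>Where</code> clause (hypothetical types, not from the sample above):</p> <pre><code class="language-c#">public class CounterRepository
{
    private readonly IMongoCollection&lt;Counter&gt; _collection;

    public CounterRepository(IMongoCollection&lt;Counter&gt; collection)
        =&gt; _collection = collection;

    public Task&lt;Counter&gt; LoadAsync(Guid id)
        =&gt; _collection.AsQueryable().Where(doc =&gt; doc.Id == id).SingleAsync();

    public async Task SaveAsync(Counter counter)
    {
        var expectedVersion = counter.Version;
        counter.Version++;

        // The version check rides along in the filter, so callers
        // never have to remember it themselves
        var result = await _collection.ReplaceOneAsync(
            c =&gt; c.Id == counter.Id &amp;&amp; c.Version == expectedVersion,
            counter,
            new UpdateOptions { IsUpsert = false });

        if (result.ModifiedCount != 1)
            throw new InvalidOperationException(
                $&quot;Counter {counter.Id} was modified by another process&quot;);
    }
}
</code></pre> <p>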
Luckily, a basic solution isn't that much code.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=W4TPWxDBe2s:VjXZrOw1Edo:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=W4TPWxDBe2s:VjXZrOw1Edo:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=W4TPWxDBe2s:VjXZrOw1Edo:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=W4TPWxDBe2s:VjXZrOw1Edo:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=W4TPWxDBe2s:VjXZrOw1Edo:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/W4TPWxDBe2s" height="1" width="1" alt=""/> Building Messaging Endpoints in Azure: Functions https://jimmybogard.com/building-messaging-endpoints-in-azure-functions/ Jimmy Bogard urn:uuid:66af48f6-2120-506c-15f9-43c21cf667af Thu, 24 Oct 2019 21:04:04 +0000 <!--kg-card-begin: markdown--><p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-evaluating-the-landscape/">Evaluating the Landscape</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-a-generic-host/">A Generic Host</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-webjobs/">Azure WebJobs</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-container-instances/">Azure Container Instances</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-functions/">Azure Functions</a></li> </ul> <p>In our last post, we looked at deploying <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageEndpoint.html">message endpoints</a> in containers, eventually out to Azure Container Instances. While fairly straightforward, this approach is fairly close to Infrastructure-as-a-Service. I can scale, but I</p> <!--kg-card-begin: markdown--><p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-evaluating-the-landscape/">Evaluating the Landscape</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-a-generic-host/">A Generic Host</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-webjobs/">Azure WebJobs</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-container-instances/">Azure Container Instances</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-functions/">Azure Functions</a></li> </ul> <p>In our last post, we looked at deploying <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageEndpoint.html">message endpoints</a> in containers, eventually out to Azure Container Instances. While fairly straightforward, this approach is fairly close to Infrastructure-as-a-Service. I can scale, but I can't auto-scale, and even if I used Kubernetes, I can't scale based on exceeding my lead time SLA (time from when a message enters the queue until when it is consumed).</p> <p>But what about the serverless option of Azure, <a href="https://azure.microsoft.com/en-us/services/functions/">Azure Functions</a>? Can this offer me a better experience for building a message endpoint?</p> <p>So far, the answer is &quot;not really&quot;, but it will highly depend on your workload or needs. 
The programming and hosting model is vastly different than containers or web jobs, so we first need to understand how our function will get triggered.</p> <h3 id="choosingatrigger">Choosing a trigger</h3> <p>Something needs to kick off our endpoint, and for this, we have a couple of choices. The most basic choice is the <a href="https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus">Azure Service Bus binding for Azure Functions</a>. I mention &quot;basic&quot; - because it is. You're not really building an endpoint here, but a single function. It may seem like a small difference, but it's really not. When I'm building endpoints, the handler is just one piece of the puzzle - I have the host configuration, logging, tracing, error handling, and beyond that, complex messaging patterns.</p> <p>The other option is a pre-release version of the <a href="https://docs.particular.net/samples/azure/functions/service-bus/">NServiceBus support for Azure Functions</a>, which dramatically alters the development model of functions itself and you're back to developing message handlers (instead of merely functions).</p> <p>First, let's look at the out-of-the-box binding.</p> <h3 id="azureservicebusbinding">Azure Service Bus binding</h3> <p>With Azure Functions, with their own special SDK package, you're creating a &quot;Functions&quot; project and choosing the &quot;Azure Service Bus&quot; trigger. All this really does behind the scenes is create a project with the correct NuGet package references:</p> <pre><code class="language-xml">&lt;ItemGroup&gt; &lt;PackageReference Include=&quot;Microsoft.Azure.WebJobs.Extensions.ServiceBus&quot; Version=&quot;3.0.6&quot; /&gt; &lt;PackageReference Include=&quot;Microsoft.NET.Sdk.Functions&quot; Version=&quot;1.0.29&quot; /&gt; &lt;/ItemGroup&gt; </code></pre> <p>Yes it's a little odd - you're adding a package for &quot;WebJobs&quot; but this is a Functions application. That's because the WebJobs triggers and Functions triggers share the same infrastructure, but with a different hosting/deployment model.</p> <p>In any case, you can now create a function with Many Attributes. Here's one for a simple request/response:</p> <pre><code class="language-c#">public static class SayFunctionSomethingHandler { [FunctionName(&quot;SayFunctionSomethingHandler&quot;)] [return: ServiceBus(&quot;NsbAzureHosting.Sender&quot;, Connection = &quot;AzureServiceBus&quot;)] public static SayFunctionSomethingResponse Run( [ServiceBusTrigger(&quot;NsbAzureHosting.FunctionReceiver&quot;, Connection = &quot;AzureServiceBus&quot;)] SayFunctionSomethingCommand command, ILogger log) { log.LogInformation($&quot;C# ServiceBus topic trigger function processed message: {command.Message}&quot;); return new SayFunctionSomethingResponse { Message = command.Message + &quot; back at ya!&quot;}; } } </code></pre> <p>We have to declare the function name, as an attribute, the trigger, as an attribute, and the return value also as an attribute. Service Bus client takes care of deserializing my message from JSON (assuming the content type of the message was <code>application/json</code>).</p> <p>Functions (or the Azure Service Bus client) don't understand the concept of &quot;request/response&quot; or &quot;pub/sub&quot; for that matter, so it's up to you to build these concepts on top. 
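&quot;Building on top&quot; means, for example, reading the incoming message's <code>ReplyTo</code> header yourself and creating your own sender - a rough sketch, assuming the Microsoft.Azure.ServiceBus client, with a made-up function name:</p> <pre><code class="language-c#">[FunctionName(&quot;ManualReplyHandler&quot;)]
public static async Task Run(
    [ServiceBusTrigger(&quot;NsbAzureHosting.FunctionReceiver&quot;, Connection = &quot;AzureServiceBus&quot;)] Message message)
{
    // Bind to the raw Message so the headers are available, then hand-roll
    // the &quot;reply&quot; half of request/response ourselves
    var response = new SayFunctionSomethingResponse { Message = &quot;Back at ya!&quot; };
    var body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(response));

    var sender = new MessageSender(
        Environment.GetEnvironmentVariable(&quot;AzureServiceBus&quot;),
        message.ReplyTo);

    await sender.SendAsync(new Message(body) { ContentType = &quot;application/json&quot; });
}
</code></pre> <p>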
If you want to subscribe to an event, you need to set up the <a href="https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-queues-topics-subscriptions#topics-and-subscriptions">topic and subscription</a> inside of the broker.</p> <p>There's no support for <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReply.html">request/response</a>, <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/ReturnAddress.html">return addresses</a>, or <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/CorrelationIdentifier.html">correlated replies</a>. For us, if we want to &quot;reply&quot; back to the receiver, we'd need to create a client, pick off some reply address from the headers, and generate and send the message.</p> <p>In the above example, I've hardcoded the reply queue, <code>NsbAzureHosting.Sender</code>, so it's not even following the correct message pattern. With request/reply, the receiver should be ignorant of the sender, just as modern email clients are.</p> <p>So while all this <em>works</em>, you don't get the more advanced features of a full message endpoint and all the patterns of the Enterprise Integration Patterns book. You have to roll a lot yourself.</p> <p>You also get only very primitive retry capabilities - retries are immediate and once exhausted the message goes to the <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/DeadLetterChannel.html">dead-letter queue</a>. With NServiceBus, we get immediate <em>and</em> delayed retries - very helpful when we're using some external resource whose downtime won't get resolved within milliseconds.</p> <p>With all this in mind, let's look at the NServiceBus function support.</p> <h3 id="nservicebusbindings">NServiceBus bindings</h3> <p>I won't rehash the entire <a href="https://docs.particular.net/samples/azure/functions/service-bus/">sample</a>, but there are some key differences in this setup than a &quot;normal&quot; functions setup. Azure Functions doesn't have a lot of the extensibility support that NServiceBus, so it won't give you things like Outbox, deferred messages, idempotent receivers, sagas, and so on. So NServiceBus gets around this by still hosting an &quot;endpoint&quot;, and delegating the message handling to the endpoint from inside your function:</p> <pre><code class="language-c#">[FunctionName(EndpointName)] public static async Task Run( [ServiceBusTrigger(queueName: EndpointName)] Message message, ILogger logger, ExecutionContext executionContext) { await endpoint.Process(message, executionContext, logger); } </code></pre> <p>The endpoint is what processes the message inside a full execution pipeline, so you can focus on building out full message handlers, instead of just functions:</p> <pre><code class="language-c#">public class TriggerMessageHandler : IHandleMessages&lt;TriggerMessage&gt; { private static readonly ILog Log = LogManager.GetLogger&lt;TriggerMessageHandler&gt;(); public Task Handle(TriggerMessage message, IMessageHandlerContext context) { Log.Warn($&quot;Handling {nameof(TriggerMessage)} in {nameof(TriggerMessageHandler)}&quot;); return context.SendLocal(new FollowupMessage()); } } </code></pre> <p>Now inside of our message handler, we get the full <code>IMessageHandlerContext</code>, and not just the <code>ExecutionContext</code> of a function, which is fairly limited. 
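</p> <p>To make that difference concrete, here's a minimal sketch of the same request/response interaction written as a handler - the published event type at the end is made up purely for illustration:</p> <pre><code class="language-c#">public class SayFunctionSomethingHandler : IHandleMessages&lt;SayFunctionSomethingCommand&gt;
{
    public async Task Handle(SayFunctionSomethingCommand message, IMessageHandlerContext context)
    {
        // the reply is routed back to whoever sent the command - no hardcoded queue names
        await context.Reply(new SayFunctionSomethingResponse { Message = message.Message + &quot; back at ya!&quot; });

        // and we can publish an event for any subscribers (hypothetical event type)
        await context.Publish(new SomethingWasSaidEvent { Message = message.Message });
    }
}</code></pre> <p>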
Now we can reply, publish, defer, set timeouts, kick off sagas, all inside the full-featured NServiceBus message endpoint.</p> <p>Neither of these options are <em>great</em>, but for very simple message handlers, an Azure Function can suffice. While an Azure Function isn't close to a &quot;PaaS Message Endpoint&quot;, it is close to a &quot;PaaS Message Handler&quot;, and that might be sufficient for your needs.</p> <p>In the last post, I'll look ahead to see how our situation may improve with <em>gulp</em> Kubernetes.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=A05L4mqnfNc:G0ypyBXmJkk:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=A05L4mqnfNc:G0ypyBXmJkk:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=A05L4mqnfNc:G0ypyBXmJkk:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=A05L4mqnfNc:G0ypyBXmJkk:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=A05L4mqnfNc:G0ypyBXmJkk:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/A05L4mqnfNc" height="1" width="1" alt=""/> Building Messaging Endpoints in Azure: Container Instances https://jimmybogard.com/building-messaging-endpoints-in-azure-container-instances/ Jimmy Bogard urn:uuid:9a20482c-813d-f41d-7df5-750551b18d4e Tue, 01 Oct 2019 15:18:57 +0000 <!--kg-card-begin: markdown--><p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-evaluating-the-landscape/">Evaluating the Landscape</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-a-generic-host/">A Generic Host</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-webjobs/">Azure WebJobs</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-container-instances/">Azure Container Instances</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-functions/">Azure Functions</a></li> </ul> <p>In the last post, we looked at Azure WebJobs as a means of deploying <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageEndpoint.html">messaging endpoints</a>. And while that may work for smaller loads and simpler systems, as the number of message endpoints</p> <!--kg-card-begin: markdown--><p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-evaluating-the-landscape/">Evaluating the Landscape</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-a-generic-host/">A Generic Host</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-webjobs/">Azure WebJobs</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-container-instances/">Azure Container Instances</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-functions/">Azure Functions</a></li> </ul> <p>In the last post, we looked at Azure WebJobs as a means of deploying <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageEndpoint.html">messaging endpoints</a>. 
And while that may work for smaller loads and simpler systems, as the number of message endpoints grows, dealing with a &quot;sidecar&quot; in a WebJob starts to become untenable.</p> <p>Once we graduate from WebJobs, what's next? What can balance the ease of deployment of WebJobs with the flexibility to scale only the endpoint as needed?</p> <p>The closest we come to this is Azure Container Instances. Unlike Kubernetes in Azure Kubernetes Service (AKS), you don't have to manage a cluster yourself. This might change in the future, as Kubernetes becomes more widespread, but for now ACIs are a much simpler step than full on Kubernetes.</p> <p>With ACIs, I can decide how large each individual instance is, and how many instances to run. As load increases or decreases, I can (manually) spin up or down services. Initially, we might keep things simple and create relatively small instance sizes, and then provision larger ones as we need to.</p> <p>But first, we need a container!</p> <h3 id="deployingintoacontainer">Deploying Into a Container</h3> <p>From an application perspective, nothing much changes. In fact, we can use the exact same application from our WebJobs instance, <em>except</em>, we don't need to do the <code>ConfigureWebJobs</code> part. It's just a console application!</p> <p>Different from WebJobs is the instructions to &quot;run&quot; the endpoint. With WebJobs, we needed that <code>run.cmd</code> file. With a container, we'll need a <code>Dockerfile</code> to describe out to build and run our container instance:</p> <pre><code class="language-dockerfile">FROM microsoft/dotnet:2.2-runtime-alpine AS base FROM microsoft/dotnet:2.2-sdk-alpine AS build WORKDIR /src COPY . . WORKDIR /src/DockerReceiver RUN dotnet publish -c Release -o /app FROM base AS final WORKDIR /app COPY --from=build /app . ENTRYPOINT [&quot;dotnet&quot;, &quot;DockerReceiver.dll&quot;] </code></pre> <p>In .NET Core 3.0, this can be simplified to have a <a href="https://devblogs.microsoft.com/dotnet/announcing-net-core-3-0/">self-running executable</a> but with this still on 2.2, we have to define the entrypoint as <code>dotnet DockerReceiver.dll</code>.</p> <p>With this in place, I can test locally by using Docker directly, or Docker Compose. I went with Docker Compose since it's got out-of-the-box support in Visual Studio, and I can define environment variables more easily:</p> <pre><code class="language-yml">version: '3.4' services: dockerreceiver: image: nsbazurehosting-dockerreceiver build: context: . dockerfile: DockerReceiver/Dockerfile </code></pre> <p>Then in my local overrides:</p> <pre><code class="language-yml">version: '3.4' services: dockerreceiver: environment: - USER_SECRETS_ID=&lt;user secrets guid here&gt; volumes: - $APPDATA/Microsoft/UserSecrets/$USER_SECRETS_ID:/root/.microsoft/usersecrets/$USER_SECRETS_ID </code></pre> <p>With this in place, I can easily run my application inside or outside of a container. 
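</p> <p>For reference, the entry point the container runs is nothing more than the generic host console application from earlier in the series - roughly something like the sketch below, where the hosted service name is a placeholder for whatever class wraps your NServiceBus endpoint:</p> <pre><code class="language-c#">public class Program
{
    public static Task Main(string[] args) =&gt;
        new HostBuilder()
            .ConfigureAppConfiguration(builder =&gt;
            {
                builder.AddEnvironmentVariables();
                builder.AddUserSecrets&lt;Program&gt;(); // picks up the user secrets volume from the compose override
            })
            .ConfigureServices(services =&gt;
            {
                // the IHostedService wrapping the NServiceBus endpoint
                services.AddHostedService&lt;NServiceBusHostedService&gt;();
            })
            .RunConsoleAsync();
}</code></pre> <p>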
I don't really need to configure any networks or anything like that - the containers don't communicate with each other, only with Azure Service Bus (and ASB doesn't come in a containerized format for developers).</p> <p>I can run with Docker Compose locally, and send a message to make sure everything is connected:</p> <pre><code>dockerreceiver_1 | dockerreceiver_1 | info: DockerReceiver.SaySomethingAlsoHandler[0] dockerreceiver_1 | Received message: Hello World </code></pre> <p>Now that we have our image up and running, it's time to deploy it into Azure.</p> <h3 id="buildingandrunninginazure">Building and Running in Azure</h3> <p>We first need a place to <em>store</em> our container, and for that, we can use <a href="https://azure.microsoft.com/en-us/services/container-registry/">Azure Container Registry</a>, which will contain the built container images from our build process. We can do this from the Azure CLI:</p> <pre><code>az acr create --resource-group NsbAzureHosting --name nsbazurehosting --sku Basic </code></pre> <p>With the container registry up, we can add a step to our Azure Build Pipeline to build our container:</p> <pre><code class="language-yml">steps: - task: Docker@1 displayName: 'Build DockerReceiver' inputs: azureSubscriptionEndpoint: # subscription # azureContainerRegistry: nsbazurehosting.azurecr.io dockerFile: DockerReceiver/Dockerfile imageName: 'dockerreceiver:$(Build.BuildId)' useDefaultContext: false buildContext: ./ </code></pre> <p>And deploy our container image:</p> <pre><code class="language-yml">steps: - task: Docker@1 displayName: 'Push DockerReceiver' inputs: azureSubscriptionEndpoint: # subscription # azureContainerRegistry: nsbazurehosting.azurecr.io command: 'Push an image' imageName: 'dockerreceiver:$(Build.BuildId)' </code></pre> <p>And now our image is pushed! The final piece is to deploy in our build pipeline. Unfortunately, we don't have any built-in step for pushing an Azure Container Instance, but we can use the <a href="https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-cli?view=azure-devops">Azure CLI task</a> to do so:</p> <pre><code>az container create --resource-group nsbazurehosting --name nsbazurehosting --image nsbazurehosting.azurecr.io/dockerreceiver:latest --cpu 1 --memory 1.5 --registry-login-server nsbazurehosting.azurecr.io --registry-username $servicePrincipalId --registry-password $servicePrincipalKey </code></pre> <p>Not too horrible, but we do need to make sure we allow access to the service principal in the script to authenticate properly. You'll also notice that I was lazy and just picked the latest image, instead of the one based on the build ID.</p> <p>In terms of the container size, I've just kept things small (I'm cheap), but we can adjust the size as necessary. With this in place, we can push code, our container builds, and gets deployed to Azure.</p> <h3 id="usingazurecontainerinstances">Using Azure Container Instances</h3> <p>Unfortunately, ACIs are a bit of a red-headed stepchild in Azure. There's not a lot of documentation, and it seems like Azure is pushing us towards AKS and Kubernetes instead of individual instances.</p> <p>For larger teams or ones that want to completely own their infrastructure, we can build complex topologies, but I don't have time for that.</p> <p>ACIs also don't have any kind of dynamic scale-out, they're originally designed for spinning up and then down, and the billing reflects this. 
Recently, the price came down enough to be just about equal as running a same size App Service instance.</p> <p>However, we can't dynamically increase instances or sizes, so if you want something like that, Kubernetes will be the way to go. It's worth starting with just ACIs, since it won't require a k8s expert on staff.</p> <p>Up next, I'll be look at hashtag serverless with Azure Functions.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=0niL_b3HOqI:elcnP5-eUMw:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=0niL_b3HOqI:elcnP5-eUMw:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=0niL_b3HOqI:elcnP5-eUMw:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=0niL_b3HOqI:elcnP5-eUMw:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=0niL_b3HOqI:elcnP5-eUMw:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/0niL_b3HOqI" height="1" width="1" alt=""/> Building Messaging Endpoints in Azure: WebJobs https://jimmybogard.com/building-messaging-endpoints-in-azure-webjobs/ Jimmy Bogard urn:uuid:3aa1e957-3eb5-be2b-4900-bba289104b44 Thu, 05 Sep 2019 18:23:14 +0000 <!--kg-card-begin: markdown--><p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-evaluating-the-landscape/">Evaluating the Landscape</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-a-generic-host/">A Generic Host</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-webjobs/">Azure WebJobs</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-container-instances/">Azure Container Instances</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-functions/">Azure Functions</a></li> </ul> <p>In the last post, I looked at creating a generic host endpoint that many of the deployed versions in Azure can share. By using a hosted service, we can then host NServiceBus in</p> <!--kg-card-begin: markdown--><p>Posts in this series:</p> <ul> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-evaluating-the-landscape/">Evaluating the Landscape</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-a-generic-host/">A Generic Host</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-webjobs/">Azure WebJobs</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-container-instances/">Azure Container Instances</a></li> <li><a href="https://jimmybogard.com/building-messaging-endpoints-in-azure-functions/">Azure Functions</a></li> </ul> <p>In the last post, I looked at creating a generic host endpoint that many of the deployed versions in Azure can share. By using a hosted service, we can then host NServiceBus in just about anything that can work with the .NET Core generic host. The differences then come to hosting and scaling models.</p> <p>First up is the closest we have to &quot;Platform-as-a-Service&quot; for background tasks - <a href="https://docs.microsoft.com/en-us/azure/app-service/webjobs-create">Azure WebJobs</a>. 
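</p> <p>As a quick refresher, that generic host endpoint boils down to an <code>IHostedService</code> that starts and stops the NServiceBus endpoint - a sketch here, not the exact code from that post, and the endpoint name is just illustrative:</p> <pre><code class="language-c#">public class NServiceBusHostedService : IHostedService
{
    private IEndpointInstance _endpoint;

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        var endpointConfiguration = new EndpointConfiguration(&quot;WebJobReceiver&quot;);
        // transport, serialization, recoverability etc. configured here

        _endpoint = await Endpoint.Start(endpointConfiguration);
    }

    public Task StopAsync(CancellationToken cancellationToken)
        =&gt; _endpoint?.Stop() ?? Task.CompletedTask;
}</code></pre> <p>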
WebJobs can be any executable/script, but a very common model for building is to use the <a href="https://docs.microsoft.com/en-us/azure/app-service/webjobs-sdk-get-started">Azure WebJobs SDK</a>.</p> <p>Azure WebJobs are a fairly robust implementation of a hosted service - all of the triggers and execution models just piggyback on top of an <a href="https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs.Host/Hosting/JobHostService.cs"><code>IHostedService</code> implementation</a>. Instead of configuring individual executables with separate trigger models, we can host multiple &quot;jobs&quot; inside a single deployed host.</p> <p>So what does this mean for our humble generic host we created earlier?</p> <h3 id="pickingawebjobsmodel">Picking a WebJobs model</h3> <p>In the last post where we created our own <code>IHostedService</code> instance, this would be separate from the WebJobs SDK &quot;host&quot;. So we have a couple of options:</p> <ul> <li>Use our generic host inside a stock WebJob</li> <li>Use a trigger inside a WebJobs SDK host</li> </ul> <p>With the first option, we can <em>also</em> technically host the WebJobs SDK host, since multiple hosts are supported, so really the question becomes, &quot;will we have other WebJobs to execute or not?&quot;</p> <p>With the WebJobs SDK, we can host any kind of triggered job, from &quot;cron&quot;-style jobs, to message-driven, continuous and more.</p> <p>It's really dependent on the other things we have going on <em>outside</em> of our generic host. If we wanted to go strictly with Azure Web Jobs SDK, we'd have to create a &quot;continuous&quot; trigger and ditch our generic host. There's really not much benefit to that choice that I've found, so the path of least resistance is to simply host our generic host inside an executable deployed as-is.</p> <h3 id="configuringourwebjobsproject">Configuring our WebJobs Project</h3> <p>The WebJobs project is really just a console application project. It doesn't really matter up front, but we can choose to output as an assembly or EXE. Either way, the piece we need to connect just a plain console application to WebJobs is some way to <em>run</em> the WebJob. For this, we create a file called &quot;run.cmd&quot; with instructions on how to run the WebJob:</p> <pre><code>dotnet WebJobReceiver.dll </code></pre> <p>If we go pure EXE, we'd just have the EXE file name in our <code>run.cmd</code> file. Finally, we just need to make sure when we build/deploy our application, this file is included in the final published version. We can do this in our <code>.csproj</code> file:</p> <pre><code class="language-xml">&lt;ItemGroup&gt; &lt;Content Include=&quot;run.cmd&quot;&gt; &lt;CopyToOutputDirectory&gt;PreserveNewest&lt;/CopyToOutputDirectory&gt; &lt;/Content&gt; &lt;/ItemGroup&gt; </code></pre> <p>With this in place, we can <code>dotnet publish</code> our project and the we can deploy this out to Azure. But where should this get deployed?</p> <h3 id="deployingazurewebjobs">Deploying Azure WebJobs</h3> <p>Azure WebJobs can't be deployed just by themselves - they have to deployed as part of an Azure AppService (and not a Linux AppService, either). This is somewhat annoying - anything we push out has to be tied with an AppService, for both build and deployment. 
You can technically push out a WebJob independent of an AppService deployment - but it's a bit ugly.</p> <p>Azure WebJobs also have a few other disadvantages:</p> <ul> <li>They share the host AppService's resources</li> <li>They cannot scale independently of the host AppService</li> </ul> <p>In short, if we don't expect too many messages, a WebJob will be fine. We can also technically deploy a WebJob alongside a &quot;null&quot; parent AppService. The parent AppService can be blank/nothing. But again, it's a bit weird.</p> <p>Deploying a WebJob means we need to combine the packages of the parent AppService and child WebJobs, which is fairly straightforward to do in an Azure DevOps build pipeline. We first have a step to publish the AppService:</p> <pre><code class="language-yaml">steps: - task: DotNetCoreCLI@2 displayName: 'Publish Sender' inputs: command: publish arguments: '-c Release --no-build --no-restore -o $(Build.ArtifactStagingDirectory)' zipAfterPublish: false workingDirectory: Sender </code></pre> <p>Then one to publish the WebJob:</p> <pre><code class="language-yaml">steps: - task: DotNetCoreCLI@2 displayName: 'Publish WebJobReceiver' inputs: command: publish publishWebProjects: false projects: WebJobReceiver arguments: '-c Release --no-build --no-restore -o $(Build.ArtifactStagingDirectory)\Sender\app_data\jobs\continuous\WebJobReceiver' zipAfterPublish: false modifyOutputPath: false </code></pre> <p>Very important here is the output folder for our project - we have to publish our WebJob into a <em>very specific</em> folder in the parent AppService's published output. Finally, we publish the App Service, which includes the WebJob:</p> <pre><code class="language-yaml">steps: - task: PublishBuildArtifacts@1 displayName: 'Publish Artifact: Sender' inputs: PathtoPublish: '$(Build.ArtifactStagingDirectory)\Sender' ArtifactName: Sender </code></pre> <p>With this in place, we can deploy this artifact to an Azure App Service, and see our WebJob in the Azure portal:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/8/2019/Annotation%202019-09-05%20131744.png" alt="Azure WebJob inside Azure Portal"></p> <p>And our message endpoint is deployed!</p> <p>Inevitably the question comes up here - why not a ServiceBus trigger? Why go through all this? 
We'll get to this in a later post when we look at Azure Functions, but next up, Azure Container Instances.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=vzQsSVP7W44:OFMH9FcRi7k:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=vzQsSVP7W44:OFMH9FcRi7k:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=vzQsSVP7W44:OFMH9FcRi7k:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=vzQsSVP7W44:OFMH9FcRi7k:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=vzQsSVP7W44:OFMH9FcRi7k:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/vzQsSVP7W44" height="1" width="1" alt=""/> Integration Testing with xUnit https://jimmybogard.com/integration-testing-with-xunit/ Jimmy Bogard urn:uuid:8e44af14-0e6d-12b1-b0f6-867629f56eca Tue, 27 Aug 2019 20:40:13 +0000 <!--kg-card-begin: markdown--><p>A few years back, I had given up on <a href="https://xunit.net/">xUnit</a> in favor of <a href="https://fixie.github.io/">Fixie</a> because of the flexibility that Fixie provides. The xUnit project is highly opinionated, and geared strictly towards unit tests. It's great for that.</p> <p>A broader testing strategy includes much more than just unit tests. With Fixie,</p> <!--kg-card-begin: markdown--><p>A few years back, I had given up on <a href="https://xunit.net/">xUnit</a> in favor of <a href="https://fixie.github.io/">Fixie</a> because of the flexibility that Fixie provides. The xUnit project is highly opinionated, and geared strictly towards unit tests. It's great for that.</p> <p>A broader testing strategy includes much more than just unit tests. With Fixie, I can implement any of the <a href="http://xunitpatterns.com/">XUnit Test Patterns</a> to implement a comprehensive automated test strategy (rather than, say, having different test frameworks for different kinds of tests).</p> <p>In unit tests, each test method is highly isolated. In integration tests, this is usually not the case. Integration tests usually &quot;touch&quot; a lot more than a single class, and almost always, interact with other processes, files, and I/O. 
Unit tests are in-process, integration tests are out-of-process.</p> <p>We can write our integration tests like our unit tests, but it's not always advantageous to do so because:</p> <ul> <li>Shared state (database)</li> <li>Expensive initialization</li> </ul> <h3 id="atypicalintegrationtest">A typical integration test</h3> <p>If we look at a &quot;normal&quot; integration test we'd write on a more or less real-world project, its code would look something like:</p> <ul> <li>Set up data through the back door</li> <li>Set up data through the front door</li> <li>Build inputs</li> <li>Send inputs to system</li> <li>Verify direct outputs</li> <li>Verify side effects</li> </ul> <p>One very simple example looks something like:</p> <pre><code class="language-c#">[Fact] public async Task Should_edit_student() { var createCommand = new Create.Command { FirstMidName = &quot;Joe&quot;, LastName = &quot;Schmoe&quot;, EnrollmentDate = DateTime.Today }; var studentId = await SendAsync(createCommand); var editCommand = new Edit.Command { Id = studentId, FirstMidName = &quot;Mary&quot;, LastName = &quot;Smith&quot;, EnrollmentDate = DateTime.Today.AddYears(-1) }; await SendAsync(editCommand); var student = await FindAsync&lt;Student&gt;(studentId); student.ShouldNotBeNull(); student.FirstMidName.ShouldBe(editCommand.FirstMidName); student.LastName.ShouldBe(editCommand.LastName); student.EnrollmentDate.ShouldBe(editCommand.EnrollmentDate.GetValueOrDefault()); } </code></pre> <p>We're trying to test &quot;editing&quot;, but we're doing it through the commands actually used by the application. In a real app, ASP.NET Core would modelbind HTTP request parameters to the <code>Edit.Command</code> object. I don't care to test with modelbinding/HTTP, so we go one layer below - send the command down, and test the result.</p> <p>To do so, we need some setup, namely an original record to edit. The first set of code there does this through the front door, by sending the original &quot;Create&quot; command down.</p> <p>From here on out, each awaited action is in its own individual transaction, mimicking as much as possible how these interactions would occur in the real world.</p> <p>But with these styles of tests, there comes a couple of problems:</p> <ul> <li>Some setup I only want to do once, for <em>all</em> tests, similar to the real world</li> <li>Assertions are more complicated as these interactions can have many side effects</li> </ul> <p>The first problem can be straightforward, but in the second, I usually tackle by switching my tests to a different pattern - <a href="http://xunitpatterns.com/Testcase%20Class%20per%20Fixture.html">&quot;Testcase Class per Fixture&quot;</a>. This lets me have a common setup with multiple test methods that each have different specific assertions.</p> <p>With this in mind, how might we address both issues, with xUnit?</p> <h3 id="sharedfixturedesign">Shared Fixture Design</h3> <p>Each of these issues basically comes down to sharing context between tests. And there are a few ways to do these in xUnit - with collection fixtures and class fixtures.</p> <p>Collection fixtures allow me to share context amongst many tests. Class fixtures allow me to share context in a class. 
The lifecycle of each determines what fixture I use for when:</p> <ul> <li><a href="https://xunit.net/docs/shared-context#collection-fixture">Collection fixtures</a> are set up once per collection</li> <li><a href="https://xunit.net/docs/shared-context#class-fixture">Class fixtures</a> are set up once per test class</li> <li><a href="https://xunit.net/docs/shared-context#constructor">Constructors/lifetimes</a> are set up once per test method</li> </ul> <p>That last one is important - if I do set up in an xUnit constructor or <code>IAsyncLifetime</code> on a test class, it executes once per test method - probably not what I want! What I'm looking for looks something like:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/7/2019/Picture0071.png" alt="Shared Context at right scope"></p> <p>In each class, the Fixture contains the &quot;Arrange/Act&quot; parts of the test, and each test method contains the &quot;Assert&quot; part. That way, our test method names can describe the assertion from a behavioral perspective.</p> <p>For our shared context, we'd want to create a collection fixture. This could include:</p> <ul> <li>Database stuff</li> <li>App stuff</li> <li>Configuration stuff</li> <li>Container stuff</li> </ul> <p>Anything that gets set up in your application's startup is a good candidate for our shared context. Here's one example of a collection definition that uses both ASP.NET Core hosting stuff and <a href="https://github.com/Mongo2Go/Mongo2Go">Mongo2Go</a></p> <pre><code class="language-c#">public class SharedFixture : IAsyncLifetime { private MongoDbRunner _runner; public MongoClient Client { get; private set; } public IMongoDatabase Database { get; private set; } public async Task InitializeAsync() { _runner = MongoDbRunner.Start(); Client = new MongoClient(_runner.ConnectionString); Database = Client.GetDatabase(&quot;db&quot;); var hostBuilder = Program.CreateWebHostBuilder(new string[0]); var host = hostBuilder.Build(); ServiceProvider = host.Services; } public Task DisposeAsync() { _runner?.Dispose(); _runner = null; return Task.CompletedTask; } } </code></pre> <p>Then we create a collection fixture definition in a separate class:</p> <pre><code class="language-c#">[CollectionDefinition(nameof(SharedFixture))] public class SharedFixtureCollection : ICollectionFixture&lt;SharedFixture&gt; { } </code></pre> <p>Now that we have a definition of a shared fixture, we can try to use it in test. But first, we need to build out the test class fixture.</p> <h3 id="classfixturedesign">Class Fixture Design</h3> <p>For a class fixture, we're doing Arrange/Act as part of its design. But that also means we'll need to use our collection fixture. 
Our class fixture needs to use our collection fixture, and xUnit supports this.</p> <p>Here's an example of a class fixture, inside a test class:</p> <pre><code class="language-c#">public class MyTestClass_For_Some_Context : IClassFixture&lt;MyTestClass_For_Some_Context.Fixture&gt; { private readonly Fixture _fixture; public MyTestClass_For_Some_Context(Fixture fixture) =&gt; _fixture = fixture; [Collection(nameof(SharedFixture)] public class Fixture : IAsyncLifetime { private readonly SharedFixture _sharedFixture; public Fixture(SharedFixture sharedFixture) =&gt; _sharedFixture = sharedFixture; public async Task InitializeAsync() { // Arrange, Act } public Order Order { get; set; } public OtherContext Context { get; set; } // no need for DisposeAsync } [Fact] public void Should_have_one_behavior() { // Assert } [Fact] public void Should_have_other_behavior() { // Assert } } </code></pre> <p>The general idea is that fixtures must be supplied via the constructor, so I have to create a bit of a nested doll here. The class fixture takes the shared fixture, then my test class takes the class fixture. I use the <code>InitializeAsync</code> method for the &quot;Arrange/Act&quot; part of my test, then capture any direct/indirect output on properties on my <code>Fixture</code> class.</p> <p>Then, in each test method, I only have asserts which look at the values in the <code>Fixture</code> instance supplied in my test method.</p> <p>With this setup, my &quot;Arrange/Act&quot; parts are only executed once per test class. Each test method can be then very explicit about the behavior to test (the test method name) and assert only one specific aspect.</p> <h3 id="itsugly">It's Ugly</h3> <p>Writing tests this manner allows me to fit inside xUnit's lifecycle configuration - but it's super ugly. I have attributes with magic values (the collection name), marker interfaces, nested classes (though this was my doing) and in general a lot of hoops.</p> <p>What I would <em>like</em> to do is just have an easy way to define global, shared state, and define a separate lifecycle for integration tests. I don't mind using the <code>IAsyncLifetime</code> part but it's a bit annoying to have to work through a separate fixture class to do so.</p> <p>So while it's <em>feasible</em> to write tests in this way, I'd suggest avoiding it. 
Having global shared state is possible, but combining those with class fixtures is just too complicated.</p> <p>Instead, either use a framework more suited for this style of tests (Fixie or any of the BDD-style libraries), or just combine all asserts into one single test method.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=-JzxV9B8dSs:yzUXNEx8ABM:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=-JzxV9B8dSs:yzUXNEx8ABM:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=-JzxV9B8dSs:yzUXNEx8ABM:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=-JzxV9B8dSs:yzUXNEx8ABM:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=-JzxV9B8dSs:yzUXNEx8ABM:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/-JzxV9B8dSs" height="1" width="1" alt=""/> AutoMapper LINQ Support Deep Dive https://jimmybogard.com/automapper-linq-support-deep-dive/ Jimmy Bogard urn:uuid:8b2d3c25-aae6-44bb-fdf8-bdb05d788345 Fri, 16 Aug 2019 14:21:02 +0000 <!--kg-card-begin: markdown--><p>My favorite feature of AutoMapper is its <a href="https://docs.automapper.org/en/stable/Queryable-Extensions.html">LINQ support</a>. If you're using AutoMapper, and not using its queryable extensions, you're missing out!</p> <p>Normal AutoMapper usage is something like:</p> <pre><code class="language-c#">var dest = _mapper.Map&lt;Dest&gt;(source); </code></pre> <p>Which would be equivalent to:</p> <pre><code class="language-c#">var dest = new Dest { Thing = source.Thing, Thing2 = source.</code></pre> <!--kg-card-begin: markdown--><p>My favorite feature of AutoMapper is its <a href="https://docs.automapper.org/en/stable/Queryable-Extensions.html">LINQ support</a>. If you're using AutoMapper, and not using its queryable extensions, you're missing out!</p> <p>Normal AutoMapper usage is something like:</p> <pre><code class="language-c#">var dest = _mapper.Map&lt;Dest&gt;(source); </code></pre> <p>Which would be equivalent to:</p> <pre><code class="language-c#">var dest = new Dest { Thing = source.Thing, Thing2 = source.Thing2, FooBarBaz = source.Foo?.Bar?.Baz }; </code></pre> <p>The main problem people run into here is that typically that source object is some object filled in from a data source, whether it's a relational or non-relational source. This implies that the original fetch pulled a <em>lot</em> more information back out than we needed to.</p> <p>Enter projections!</p> <h3 id="constrainingdatafetchingwithprojection">Constraining data fetching with projection</h3> <p>If we wanted to fetch <em>only</em> the data we needed from the server, the quickest path to do so would be a projection at the SQL level:</p> <pre><code class="language-sql">SELECT src.Thing, src.Thing2, bar.Baz as FooBarBaz FROM Source src LEFT OUTER JOIN Foo foo on src.FooId = foo.Id LEFT OUTER JOIN Bar bar on foo.BarId = bar.Id </code></pre> <p>Well...that's already getting a bit ugly. 
Plus, all that joining information is already represented in our object model, why do we have to drop down to such a low level?</p> <p>If we're using LINQ with our data provider, then we can use the <a href="https://docs.microsoft.com/en-us/dotnet/api/system.linq.iqueryprovider?view=netframework-4.8">query provider</a> to perform a <a href="https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/linq/basic-linq-query-operations#selecting-projections">Select projection</a></p> <pre><code class="language-c#">var dest = await dbContext.Destinations .Where(d =&gt; d.Id = id) .Select(d =&gt; new Dest { Thing = source.Thing, Thing2 = source.Thing2, FooBarBaz = source.Foo.Bar.Baz. }) .FirstOrDefaultAsync(); </code></pre> <p>And underneath the covers, the query provider parses the <code>Select</code> expression tree and converts that to SQL, so that the <code>SELECT</code> query against the database is <em>only</em> what we need and our query provider skips that data/domain model altogether to go from SQL straight to our destination type.</p> <p>All is great! Except, of course, we're back to boring code, but this time, it's projections!</p> <h3 id="automapperlinqsupport">AutoMapper LINQ Support</h3> <p>Enter AutoMapper's LINQ support. Traditional AutoMapper usage is in in-memory objects, but several years ago, we also added the ability to automatically build out those <code>Select</code> projections for you as well:</p> <pre><code class="language-c#">// Before var dest = await dbContext.Destinations .Where(d =&gt; d.Id = id) .Select(d =&gt; new Dest { // Just automap this dumb junk Thing = source.Thing, Thing2 = source.Thing2, FooBarBaz = source.Foo.Bar.Baz. }) .FirstOrDefaultAsync(); // After var dest = await dbContext.Destinations .Where(d =&gt; d.Id = id) .ProjectTo&lt;Dest&gt;() // oh so pretty .FirstOrDefaultAsync(); </code></pre> <p>These two statements are exactly equivalent. AutoMapper behind the scenes builds up the <code>Select</code> projection expression tree exactly how you would do this yourself, <em>except</em> it uses the AutoMapper mapping configuration to do so.</p> <p>We get the best of both worlds here - enforcement of our destination type conventions, got rid of all that dumb projection code, and safety with configuration validation.</p> <p>But how does this all work?</p> <h3 id="underneaththecovers">Underneath the Covers</h3> <p>Behind the scenes, it all starts with extending out <code>IQueryable</code> (not <code>IEnumerable</code>) to create the <code>ProjectTo</code> method:</p> <pre><code class="language-c#">public static class Extensions { public static IQueryable&lt;TDestination&gt; ProjectTo&lt;TDestination&gt;( this IQueryable source, IConfigurationProvider configuration) =&gt; new ProjectionExpression(source, configuration.ExpressionBuilder) .To&lt;TDestination&gt;(); } </code></pre> <p>I've pushed all the projection logic to a separate object, <code>ProjectionExpression</code>. 
One critical thing we need to do is make sure we're returning the exact same <code>IQueryable</code> instance at the end, so that you can do the extended behaviors that many ORMs support, such as async queries.</p> <p>Next, we build up an expression tree that needs to be fed in to <code>IQueryable.Select</code>:</p> <pre><code class="language-c#">public class ProjectionExpression { // ctor etc public IQueryable&lt;TResult&gt; To&lt;TResult&gt;() { return (IQueryable&lt;TResult&gt;) _builder.GetMapExpression( _source.ElementType, typeof(TResult)) .Aggregate(_source, Select); } } </code></pre> <p><code>GetMapExpression</code> return <code>IEnumerable&lt;LambdaExpression&gt;</code>, which we then use to reduce to the final <code>Select</code> call on the <code>IQueryable</code> instance:</p> <pre><code class="language-c#">private static IQueryable Select(IQueryable source, LambdaExpression lambda) =&gt; source.Provider.CreateQuery( Expression.Call( null, QueryableSelectMethod.MakeGenericMethod(source.ElementType, lambda.ReturnType), new[] { source.Expression, Expression.Quote(lambda) } ) ); </code></pre> <p>Calling in to the underlying <code>IQueryProvider</code> instance ensures that we're using the query provider, instead of just plain <code>Queryable</code>, to create the query. The underlying query provider often does &quot;more&quot; things that need to be tracked, and this also makes sure that the final <code>IQueryable</code> is the one from the original query provider - not the built-in one in .NET.</p> <p>The original <code>GetMapExpression</code> method on the <code>IExpressionBuilder</code> instance isn't too too exciting, we do some caching of building out expressions, and have a chain of responsibility pattern in place to decide how to bind each destination member based on different rules (things that need to be mapped, enumerables, strings etc.).</p> <p>We start by finding the mapping configuration for the source/destination type pair, then for each destination member, pick an appropriate expression building strategy based on that the source/destination type pair for each member, including any member configuration you've put in place.</p> <p>The end result is a <code>Select</code> call to the correct <code>IQueryProvider</code> instance, with a fully built-out expression tree, that the underlying <code>IQueryProvider</code> instance can then take and build out the correct data projection straight from the server.</p> <p>And because we simply chain off of <code>IQueryProvider</code>, we can extend <em>any</em> query provider that also handles <code>Select</code>.</p> <h3 id="whenthingsgowrong">When things go wrong</h3> <p>The expression trees we build up need to be parsed and understood by the underlying query provider. This is not always the case, and if you search EF or EF Core for issues containing the words &quot;AutoMapper&quot;, you'll find many, many issues with that expression parsing/translation (<a href="https://github.com/aspnet/EntityFrameworkCore/issues?utf8=%E2%9C%93&amp;q=is%3Aissue+automapper">&gt; 100 in the EF Core repository alone</a>). It's not a perfect system, and expression tree parsing is a hard problem.</p> <p>When you run into an issue with the expression tree, you'll get some weird wonky error message from the query provider. When you do, the easiest thing to diagnose is to drop back down in to &quot;raw&quot; LINQ, calling <code>Select</code> manually. 
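</p> <p>In practice, that means temporarily swapping <code>ProjectTo</code> for the equivalent hand-written projection, using the same example types from above:</p> <pre><code class="language-c#">var dest = await dbContext.Destinations
    .Where(d =&gt; d.Id == id)
    // .ProjectTo&lt;Dest&gt;() // temporarily replaced with the raw projection below
    .Select(d =&gt; new Dest
    {
        Thing = d.Thing,
        Thing2 = d.Thing2,
        // FooBarBaz = d.Foo.Bar.Baz // start by commenting out suspect members like nested navigations
    })
    .FirstOrDefaultAsync();</code></pre> <p>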
Once you do, you can remove members one-by-one until you find the underlying problem mapping, and join dozens of others opening issues in the appropriate GitHub repository :).</p> <p>For nearly all cases of straightforward projection, it works great! So if you're using AutoMapper, check out <code>ProjectTo</code>, as it's the greatest (mapping) thing since sliced bread.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=dJSTLTw1f7U:TAsqjCiYwuw:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=dJSTLTw1f7U:TAsqjCiYwuw:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=dJSTLTw1f7U:TAsqjCiYwuw:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=dJSTLTw1f7U:TAsqjCiYwuw:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=dJSTLTw1f7U:TAsqjCiYwuw:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/dJSTLTw1f7U" height="1" width="1" alt=""/> AutoMapper 9.0 Released https://jimmybogard.com/automapper-9-0-released/ Jimmy Bogard urn:uuid:13c72887-343f-9303-3c2e-810464901497 Mon, 12 Aug 2019 13:35:37 +0000 <!--kg-card-begin: markdown--><p>As with the other major releases of AutoMapper, this one introduces breaking API changes with relatively few other additions. The major breaking changes (see <a href="https://docs.automapper.org/en/stable/9.0-Upgrade-Guide.html">upgrade guide</a> for details) include:</p> <ul> <li>Removing the static API</li> <li>Removing &quot;dynamic&quot; maps (automatically created maps)</li> <li>Fix on <code>IMappingAction</code> to include a <code>ResolutionContext</code> parameter</li> </ul> <p>The</p> <!--kg-card-begin: markdown--><p>As with the other major releases of AutoMapper, this one introduces breaking API changes with relatively few other additions. The major breaking changes (see <a href="https://docs.automapper.org/en/stable/9.0-Upgrade-Guide.html">upgrade guide</a> for details) include:</p> <ul> <li>Removing the static API</li> <li>Removing &quot;dynamic&quot; maps (automatically created maps)</li> <li>Fix on <code>IMappingAction</code> to include a <code>ResolutionContext</code> parameter</li> </ul> <p>The motivation behind removing the static API is that most new users to AutoMapper will do so through the <a href="https://github.com/AutoMapper/AutoMapper.Extensions.Microsoft.DependencyInjection">DI extensions packages</a>, using <code>services.AddAutoMapper(typeof(Startup))</code>. If you want a static usage of AutoMapper, you can still do so, but you're in charge of creating a static holder and referencing that.</p> <p>The bigger, more unpopular change, is removing dynamic maps. Dynamic maps have a long history in AutoMapper, and it was actually removed once before added back in. I've never used them, will never use them, and they go against the underlying <a href="https://jimmybogard.com/automappers-design-philosophy/">design philosophy of the project</a>. I don't want to support features I don't use or recommend, so this is getting the axe.</p> <blockquote> <p>Let this be a lesson to other OSS authors - never add features to your library you'd recommend avoiding, no matter how much a feature is &quot;wanted&quot;.</p> </blockquote> <p>I've also released the latest version of the DI package, and will roll out updates to other ancillary packages shortly.</p> <p>Enjoy! 
Or not ;)</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=LTTjSRjDbsQ:RHn86KtMkgQ:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=LTTjSRjDbsQ:RHn86KtMkgQ:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=LTTjSRjDbsQ:RHn86KtMkgQ:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=LTTjSRjDbsQ:RHn86KtMkgQ:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=LTTjSRjDbsQ:RHn86KtMkgQ:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/LTTjSRjDbsQ" height="1" width="1" alt=""/> Immutability in Message Types https://jimmybogard.com/immutability-in-message-types/ Jimmy Bogard urn:uuid:190d100b-6d15-181d-fb00-f77ca8129d1a Thu, 08 Aug 2019 18:50:31 +0000 <!--kg-card-begin: markdown--><p>In just about every custom I've worked with, eventually the topic of immutability in messages comes up. Messages are immutable in concept, that is, we shouldn't change them (except in the case of <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/DocumentMessage.html">document messages</a>). Since messages are generally immutable in concept, why not make them immutable in our applications</p> <!--kg-card-begin: markdown--><p>In just about every custom I've worked with, eventually the topic of immutability in messages comes up. Messages are immutable in concept, that is, we shouldn't change them (except in the case of <a href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/DocumentMessage.html">document messages</a>). Since messages are generally immutable in concept, why not make them immutable in our applications dealing with the messages themselves?</p> <p>Immutabilty has a number of benefits, ask <a href="https://blog.ploeh.dk/">Mark Seemann</a> laid out in a <a href="https://discuss.particular.net/t/support-for-immutable-messages/1270">discussion question</a> on the NServiceBus discussion forum. Typically, a message/command/event is defined in type-oriented systems as a <a href="https://martinfowler.com/eaaCatalog/dataTransferObject.html">Data Transfer Object (DTO)</a>:</p> <pre><code class="language-c#">public class CreateOrder: ICommand { public int OrderId { get; set; } public DateTime Date { get; set; } public int CustomerId { get; set; } } </code></pre> <p>Above is an example of a command message contract defined in NServiceBus. There are a number of problems with mutable message types:</p> <ul> <li>Message receivers can modify the deserialized representation, &quot;changing the facts&quot;</li> <li>No invariants specified. What's required? What's valid? What can be null?</li> </ul> <p>The natural reaction to these issues are to add behavior to our message types.</p> <h3 id="buildingimmutability">Building Immutability</h3> <p>If we want immutability, and record-type-like behavior, we'll need to do a few things. First is to worry about serialization/deserialization. With DTOs, it's rather easy to guarantee that our wire format <em>can</em> be serialized. As long as we stick to &quot;normal&quot; primitives, public setters, and &quot;normal&quot; collection types, everything works well.</p> <p>But as anyone who has tried to create fully encapsulated domain models that are also data models, we encapsulation brings pain with dealing with data hydration. 
In encapsulated domain models, there's usually some level of compromise I undergo to get my domain model <em>good enough</em> for our purposes. It will never be perfect - but perfection is not the goal, shipping is.</p> <p>To build immutabilty in C#, we can guarantee this with interfaces (<a href="https://docs.particular.net/nservicebus/messaging/immutable-messages">full sample from the NServiceBus docs</a>):</p> <pre><code class="language-c#">public interface ICreateOrder : ICommand { int OrderId { get; } DateTime Date { get; } int CustomerId { get; } } </code></pre> <p>We need to use an interface because many serializers don't support C#'s read-only properties. If you define your message type as readonly properties:</p> <pre><code class="language-c#">public class CreateOrder : ICommand { public CreateOrder(int orderId, DateTime date, int customerId) { OrderId = orderId; Date = date; CustomerId = customerId; } public int OrderId { get; } public DateTime Date { get; } public int CustomerId { get; } } </code></pre> <p>Then behind the scenes, the C# compiler creates a readonly backing field, and the constructor sets this readonly field. Instead, you have to create private setters to allow deserialization to even be <em>possible</em></p> <pre><code class="language-c#">public class CreateOrder : ICreateOrder { public CreateOrder(int orderId, DateTime date, int customerId) { OrderId = orderId; Date = date; CustomerId = customerId; } public int OrderId { get; private set; } public DateTime Date { get; private set; } public int CustomerId { get; private set; } } </code></pre> <p>Then your handler just uses that readonly interface. I also see that my IDE sees that those private setters are redundant for code purposes, but I can't remove them, or my serialization won't work.</p> <p>We get immutability, but at a price - more code to write, and funny, still not semantically valid code that has to know how exactly our serializers work. Is it even worth it?</p> <h3 id="whatevenisamessage">What Even Is A Message</h3> <p>In many systems, we substitute &quot;type&quot; for &quot;message&quot; or &quot;contract&quot;. But that's not what a message is - our message is what's on the wire. The schema or contract is the agreement or specification for what that message should look like, and expected application-level semantics around processing that message.</p> <p>In the web API world, quite a lot of work has gone into building specifications around APIs, first with <a href="https://swagger.io/">Swagger</a> and now around the <a href="https://www.openapis.org/">Open API specification</a>. In fact, Swagger is now built around Open API Spec (OAS). A wealth of tooling now supports defining, describing, and using Open API-spec-defined APIs.</p> <p>Not so much on the durable message side of things. Many specifications exist on message protocols, but not the messages themselves. It's really up to producers and consumers to build specifications on the <em>content</em> of the messages.</p> <p>It's a common attempt in messaging systems to try and define the schema <em>as</em> the type, but the problem is a type system is semantically similar, but not equivalent, to a schema. You wind up having the issues that <a href="https://en.wikipedia.org/wiki/Web_Services_Description_Language">WSDL</a> had - schemas that had too strict rules or assumptions on specific runtime's type systems. 
Anyone that's tried to consume a Java web service from .NET will know how off this was.</p> <p>Ultimately, the type is <em>not</em> the message, but a convenient mechanism of describing the schema and providing serialization. It isn't the message, though, and we should refrain from trying to put object-oriented concepts on top of it. Those don't translate to wire formats, or even message schema specifications.</p> <h3 id="schemasandcontractsandclassesohmy">Schemas and Contracts and Classes (oh my)</h3> <p>The third <a href="https://en.wikipedia.org/wiki/Service-orientation#Essential_characteristics">tenet of Service Oriented Architecture</a> is:</p> <blockquote> <p>Services share schema and contract, not class (or type)</p> </blockquote> <p>But what immutability in C# is trying to do is share and enforce classes/types amongst producers/consumers, moving the DTO past just a serialization helper to an actual behavioral object. Building behavior into our serialization type would break a tenet of SOA.</p> <p>Our producers and consumers should instead focus on the message schema and contract, and application protocols around such, than the manifestation in a specific type system. Ideally, we could define our schemas and contracts, and share those, but we're still a ways off from being able to do so (like you could with WSDL or Open API).</p> <p>Until then, we can share types, but <em>only</em> if we understand that the type is not the message, the schema defines the message, and the type is only a convenience mechanism to assist in describing and materializing a message.</p> <p>Today, just use a DTO as your message type, keep it simple, and keep an eye out for efforts to help define (and validate) async messages, and tooling to help define message types/objects based on your target framework/runtime of choice.</p> <!--kg-card-end: markdown--><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=bSzQt2V-6iY:mQm6m-88yKs:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=bSzQt2V-6iY:mQm6m-88yKs:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=bSzQt2V-6iY:mQm6m-88yKs:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=bSzQt2V-6iY:mQm6m-88yKs:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=bSzQt2V-6iY:mQm6m-88yKs:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/bSzQt2V-6iY" height="1" width="1" alt=""/> Gocode Vim Plugin and Go Modules https://blog.jasonmeridth.com/posts/gocode-vim-plugin-and-go-modules/ Jason Meridth urn:uuid:c9be1149-395b-e365-707e-8fa2f475093c Sat, 05 Jan 2019 17:09:26 +0000 <p>I recently purchased <a href="https://lets-go.alexedwards.net/">Let’s Go</a> from Alex Edwards. I wanted an end-to-end Golang website tutorial. It has been great. Lots learned.</p> <p>Unfortunately, he is using Go’s modules and the version of the gocode Vim plugin I was using did not support Go modules.</p> <h3 id="solution">Solution:</h3> <p>Use <a href="https://github.com/stamblerre/gocode">this fork</a> of the gocode Vim plugin and you’ll get support for Go modules.</p> <p>I use <a href="https://github.com/junegunn/vim-plug">Vim Plug</a> for my Vim plugins. Huge fan of Vundle but I like the post-actions feature of Plug. 
I just had to change one line of my vimrc and re-run updates.</p> <div class="language-diff highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="gh">diff --git a/vimrc b/vimrc index 3e8edf1..8395705 100644 </span><span class="gd">--- a/vimrc </span><span class="gi">+++ b/vimrc </span><span class="gu">@@ -73,7 +73,7 @@ endif </span> let editor_name='nvim' Plug 'zchee/deoplete-go', { 'do': 'make'} endif <span class="gd">- Plug 'nsf/gocode', { 'rtp': 'vim', 'do': '~/.config/nvim/plugged/gocode/vim/symlink.sh' } </span><span class="gi">+ Plug 'stamblerre/gocode', { 'rtp': 'vim', 'do': '~/.vim/plugged/gocode/vim/symlink.sh' } </span> Plug 'godoctor/godoctor.vim', {'for': 'go'} " Gocode refactoring tool " } </code></pre></div></div> <p>That is the line I had to change then run <code class="highlighter-rouge">:PlugUpdate!</code> and the new plugin was installed.</p> <p>I figured all of this out due to <a href="https://github.com/zchee/deoplete-go/issues/134#issuecomment-435436305">this comment</a> by <a href="https://github.com/cippaciong">Tommaso Sardelli</a> on Github. Thank you Tommaso.</p> Raspberry Pi Kubernetes Cluster - Part 4 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4/ Jason Meridth urn:uuid:56f4fdcb-5310-bbaa-c7cf-d34ef7af7682 Fri, 28 Dec 2018 16:35:23 +0000 <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1">Raspberry Pi Kubenetes Cluster - Part 1</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2">Raspberry Pi Kubenetes Cluster - Part 2</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3">Raspberry Pi Kubenetes Cluster - Part 3</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4">Raspberry Pi Kubenetes Cluster - Part 4</a></p> <p>Howdy again.</p> <p>In this post I’m going to show you how to create a docker image to run on ARM architecture and also how to deploy it and view it.</p> <p>To start please view my basic flask application called fl8 <a href="https://github.com/meridth/fl8">here</a></p> <p>If you’d like to clone and use it:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone git@github.com:meridth/fl8.git <span class="o">&amp;&amp;</span> <span class="nb">cd </span>fl8 </code></pre></div></div> <h1 id="arm-docker-image">ARM docker image</h1> <p>First we need to learn about QEMU</p> <h3 id="what-is-qemu-and-qemu-installation">What is QEMU and QEMU installation</h3> <p>QEMU (Quick EMUlator) is an Open-Source hosted hypervisor, i.e. an hypervisor running on a OS just as other computer programs, which performs hardware virtualization. QEMU emulates CPUs of several architectures, e.g. x86, PPC, ARM and SPARC. It allows the execution of non-native target executables emulating the native execution and, as we require in this case, the cross-building process.</p> <h3 id="base-docker-image-that-includes-qemu">Base Docker image that includes QEMU</h3> <p>Please open the <code class="highlighter-rouge">Dockerfile.arm</code> and notice the first line: <code class="highlighter-rouge">FROM hypriot/rpi-alpine</code>. This is a base image that includes the target qemu statically linked executable, <em>qemu-arm-static</em> in this case. 
I chose <code class="highlighter-rouge">hypriot/rpi-alpine</code> because the alpine base images are much smaller than other base images.</p> <h3 id="register-qemu-in-the-build-agent">Register QEMU in the build agent</h3> <p>To add QEMU in the build agent there is a specific Docker Image performing what we need, so just run in your command line:</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run <span class="nt">--rm</span> <span class="nt">--privileged</span> multiarch/qemu-user-static:register <span class="nt">--reset</span> </code></pre></div></div> <h3 id="build-image">Build image</h3> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker build <span class="nt">-f</span> ./Dockerfile.arm <span class="nt">-t</span> meridth/rpi-fl8 <span class="nb">.</span> </code></pre></div></div> <p>And voila! You now have an image that will run on Raspberry Pis.</p> <h1 id="deployment-and-service">Deployment and Service</h1> <p><code class="highlighter-rouge">/.run-rpi.sh</code> is my script where I run a Kubernetes deployment with 3 replicas and a Kubernetes service. Please read <code class="highlighter-rouge">fl8-rpi-deployment.yml</code> and <code class="highlighter-rouge">fl8-rpi-service.yml</code>. They are only different from the other deployment and service files by labels. Labels are key/vaule pairs that can be used by selectors later.</p> <p>The deployment will pull my image from <code class="highlighter-rouge">meridth/rpi-fl8</code> on dockerhub. If you have uploaded your docker image somewhere you can change the deployment file to pull that image instead.</p> <h1 id="viewing-application">Viewing application</h1> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl get pods </code></pre></div></div> <p>Choose a pod to create the port forwarding ssh tunnel.</p> <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>kubectl port-forward <span class="o">[</span>pod-name] <span class="o">[</span>app-port]:[app-port] </code></pre></div></div> <p>Example: <code class="highlighter-rouge">kubectl port-forward rpi-fl8-5d84dd8ff6-d9tgz 5010:5010</code></p> <p>The final result when you go to <code class="highlighter-rouge">http://localhost:5010</code> in a browser.</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/port_forward.png" alt="port forward result" /></p> <p>Hope this helps someone else. 
Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4/">Raspberry Pi Kubernetes Cluster - Part 4</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on December 28, 2018.</p> Raspberry Pi Kubernetes Cluster - Part 3 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3/ Jason Meridth urn:uuid:c12fa6c5-8e7a-6c5d-af84-3c0452cf4ae4 Mon, 24 Dec 2018 21:59:23 +0000 <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1">Raspberry Pi Kubenetes Cluster - Part 1</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2">Raspberry Pi Kubenetes Cluster - Part 2</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3">Raspberry Pi Kubenetes Cluster - Part 3</a></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-4">Raspberry Pi Kubenetes Cluster - Part 4</a></p> <p>Well, it took me long enough to follow up on my previous posts. There are reasons.</p> <ol> <li>The day job has been fun and busy</li> <li>Family life has been fun and busy</li> <li>I kept hitting annoying errors when trying to get my cluster up and running</li> </ol> <p>The first two reasons are the usual reasons a person doesn’t blog. :)</p> <p>The last one is what prevented me from blogging sooner. I had mutliple issues when trying to use <a href="https://rak8s.io">rak8s</a> to setup my cluster. I’m a big fan of <a href="https://ansible.com">Ansible</a> and I do not like running scripts over and over. I did read <a href="https://gist.github.com/alexellis/fdbc90de7691a1b9edb545c17da2d975">K8S on Raspbian Lite</a> from top to bottom and realized automation would make this much better.</p> <!--more--> <h3 id="the-issues-i-experienced">The issues I experienced:</h3> <h4 id="apt-get-update-would-not-work">apt-get update would not work</h4> <p>I started with the vanilla Raspbian lite image to run on my nodes and had MANY MANY issues with running <code class="highlighter-rouge">apt-get update</code> and <code class="highlighter-rouge">apt-get upgrade</code>. The mirrors would disconnect often and just stall. 
This doesn’t help my attempted usage of rak8s which does both on the <code class="highlighter-rouge">cluster.yml</code> run (which I’ll talk about later).</p> <h4 id="rak8s-changes-needed-to-run-hypriotos-and-kubernetes-1131">rak8s changes needed to run HypriotOS and kubernetes 1.13.1</h4> <p>Clone the repo locally and I’ll walk you through what I changed to get <a href="https://rak8s.io">rak8s</a> working for me and HypriotOS.</p> <p>Change the following files:</p> <ul> <li><code class="highlighter-rouge">ansible.cfg</code> <ul> <li>change user from <code class="highlighter-rouge">pi</code> to <code class="highlighter-rouge">pirate</code></li> </ul> </li> <li><code class="highlighter-rouge">roles/kubeadm/tasks/main.yml</code> <ul> <li>add <code class="highlighter-rouge">ignore_errors: True</code> to <code class="highlighter-rouge">Disable Swap</code> task</li> <li>I have an open PR for this <a href="https://github.com/rak8s/rak8s/pull/46">here</a></li> </ul> </li> <li><code class="highlighter-rouge">group_vars/all.yml</code> <ul> <li>Change <code class="highlighter-rouge">kubernetes_package_version</code> to <code class="highlighter-rouge">"1.13.1-00"</code></li> <li>Change <code class="highlighter-rouge">kubernetes_version</code> to <code class="highlighter-rouge">"v1.13.1"</code></li> </ul> </li> </ul> <p>After you make those changes you can run <code class="highlighter-rouge">ansible-playbook cluster.yml</code> as the rak8s documentation suggests. Please note this is after you edit <code class="highlighter-rouge">inventory</code> and copy <code class="highlighter-rouge">ssh</code> keys to raspberry pis.</p> <h4 id="flannel-networking-issue-once-nodes-are-up">Flannel networking issue once nodes are up</h4> <p>After I get all of the nodes up I noticed the master node was marked ast <code class="highlighter-rouge">NotReady</code> and when I ran <code class="highlighter-rouge">kubectl describe node raks8000</code> I saw the following error:</p> <blockquote> <p>KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized</p> </blockquote> <p>This error is known in kubernetes &gt; 1.12 and flannel v0.10.0. It is mentioned in <a href="https://github.com/coreos/flannel/issues/1044">this issue</a>. The fix is specifically mentioned <a href="https://github.com/coreos/flannel/issues/1044#issuecomment-427247749">here</a>. It is to run the following command</p> <p><code class="highlighter-rouge">kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml</code></p> <p>After readin the issue it seems the fix will be in the next version of flannel and will be backported to v0.10.0.</p> <h1 id="a-running-cluster">A running cluster</h1> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/running_cluster.png" alt="Running Cluster" /></p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-3/">Raspberry Pi Kubernetes Cluster - Part 3</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on December 24, 2018.</p> MVP how minimal https://lostechies.com/ryansvihla/2018/12/20/mvp-how-minimal/ Los Techies urn:uuid:3afadd9e-98a7-8d37-b797-5403312a2999 Thu, 20 Dec 2018 20:00:00 +0000 MVPs or Minimum Viable Products are pretty contentious ideas for something seemingly simple. 
Depending on background and where people are coming from experience-wise those terms carry radically different ideas. In recent history I’ve seen up close two extreme contrasting examples of MVP: <p>MVPs or Minimum Viable Products are pretty contentious ideas for something seemingly simple. Depending on background and where people are coming from experience-wise those terms carry radically different ideas. In recent history I’ve seen up close two extreme contrasting examples of MVP:</p> <ul> <li>Mega Minimal: website and db, mostly manual on the backend</li> <li>Mega Mega: provisioning system, dynamic tuning of systems via ML, automated operations, monitoring and a few others I’m leaving out.</li> </ul> <h2 id="feedback">Feedback</h2> <p>If we’re evaluating which approach gives us more feedback, Mega Minimal MVP is gonna win hands down here. Some will counter they don’t want to give people a bad impression with a limited product and that’s fair, but it’s better than no impression (the dreaded never-shipped MVP). The Mega Mega MVP I referenced took months to demo, only had one of those checkboxes set up, and wasn’t ever demoed again. So we can categorically say that failed at getting any feedback.</p> <p>Whereas the Mega Minimal MVP got enough feedback and users for the founders to realize that wasn’t a business for them. Better than after hiring a huge team and sinking a million plus into dev efforts for sure. Not the happy ending I’m sure you all were expecting, but I view that as mission accomplished.</p> <h2 id="core-value">Core Value</h2> <ul> <li>Mega Minimal: they only focused on a single feature, executed well enough that people gave them some positive feedback, but not enough to justify automating everything.</li> <li>Mega Mega: I’m not sure anyone who talked about the product saw the same core value, and there were several rewrites and shifts along the way.</li> </ul> <p>Advantage Mega Minimal again.</p> <h2 id="what-about-entrants-into-a-crowded-field">What about entrants into a crowded field</h2> <p>Well that is harder and the MVP tends to be less minimal, because the baseline expectations are just much higher. I still lean towards Mega Minimal having a better chance at getting users, since there is a non-zero chance the Mega Mega MVP will never get finished. I still think it is worth the exercise of focusing on the core value that makes your product <em>not</em> a me-too, and even considering how you can find a niche in a crowded field instead of just being “better”; your MVP can be that niche differentiator.</p> <h2 id="internal-users">Internal users</h2> <p>Sometimes a good middle ground is considering getting lots of internal users if you’re really worried about bad experiences. This has its definite downsides however, and you may not get diverse enough opinions. But it does give you some feedback while saving some face or bad experiences. I often think of the example of EC2 that was heavily used by Amazon, before being released to the world. That was a luxury Amazon had, where their customer base and their user base happened to be very similar, and they had bigger scale needs than any of their early customers, so the early internal feedback loop was a very strong signal.</p> <h2 id="summary">Summary</h2> <p>In the end however you want to approach MVPs is up to you, and if you find success with a meatier MVP than I have please don’t let me push you away from what works. 
But if you are having trouble shipping and are getting pushed all the time to add one more feature to that MVP before releasing it, consider stepping back and asking: is this really core value for the product? Do you already have your core value? If so, consider just releasing it.</p> Surprise Go is ok for me now https://lostechies.com/ryansvihla/2018/12/13/surprise-go-is-ok/ Los Techies urn:uuid:53abf2a3-23f2-5855-0e2d-81148fb908bf Thu, 13 Dec 2018 20:23:00 +0000 I’m surprised to say this, I am ok using Go now. It’s not my style but I am able to build most anything I want to with it, and the tooling around it continues to improve. <p>I’m surprised to say this, I am ok using Go now. It’s not my style but I am able to build most anything I want to with it, and the tooling around it continues to improve.</p> <p>About 7 months ago I wrote about all the things I didn’t really care for in Go, and now I either am no longer so bothered by them or things have improved.</p> <p>Go Modules so far is a huge improvement over Dep and Glide for dependency management. It’s easy to set up, performant and eliminates the GOPATH silliness. I haven’t tried it yet with some of the goofier libraries that gave me problems in the past (k8s api for example) so the jury is out on that, but again pretty impressed. I no longer have to check in vendor to speed up builds. Lesson: use Go Modules.</p> <p>I pretty much stopped using channels for everything but shutdown signals and that fits my preferences pretty well. I use mutexes and semaphores for my multithreaded code and feel no guilt about it. This cut out a lot of pain for me, and with the excellent race detector I feel really comfortable writing multi-threaded code in Go now. Lesson: don’t use channels much.</p> <p>Lack of generics still sometimes sucks but I usually implement some crappy casting with dynamic types if I need that. I’ve sorta made my peace with just writing more code, and am no longer so hung up on it. Lesson: relax.</p> <p>Error handling I’m still struggling with. I thought about using one of the error Wrap() libraries but an official one is in draft spec now, so I’ll wait on that. I now tend to have less nesting of functions as a result; this probably means longer functions than I like, but my code looks more “normal” now. This is a trade-off I’m ok with. Lesson: relax more.</p> <p>I see the main virtue of Go now as being that it is very popular in the infrastructure space where I am, and so it’s becoming the common tongue (largely replacing Python for those sorts of tasks). For this, honestly, it’s about right. It’s easy to rip out command line tools and deploy binaries for every platform with no runtime install.</p> <p>The community’s conservative attitude I sort of view as a feature now, in that there isn’t a bunch of different options that are popular and there is no arguing over what file format is used. This drove me up the wall initially, but I appreciate how much less time I spend on these things now.</p> <p>So now I suspect Go will be my “last” programming language. It’s not the one I would have chosen, but where I am at in my career, where most of my dev work is automation and tooling, it fits the bill pretty well.</p> <p>Also equally important, most of the people working with me didn’t have full time careers as developers or spend their time reading “Domain Driven Design” (amazing book), so adding in a bunch of nuanced stuff that may be technically optimal but assumes the reader grasps all that nuance isn’t a good tradeoff for me.</p> <p>So I think I sorta get it now. 
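To put one of the earlier points in concrete terms, the "crappy casting with dynamic types" I fall back on instead of generics is nothing fancier than an interface{} slice and a type switch, along these lines (a made-up illustration, not code from any real project):</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>package main

import "fmt"

// describe accepts anything and casts on the way out; the price of
// skipping generics is a runtime check instead of a compile-time one.
func describe(values []interface{}) {
	for _, v := range values {
		switch t := v.(type) {
		case string:
			fmt.Println("string:", t)
		case int:
			fmt.Println("int:", t)
		default:
			fmt.Println("something else:", t)
		}
	}
}

func main() {
	describe([]interface{}{"a", 1, true})
}
</code></pre></div></div> <p>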
I’ll never be a cheerleader for the language but it definitely solves my problems well enough.</p> Collaboration vs. Critique https://lostechies.com/derekgreer/2018/05/18/collaboration-vs-critique/ Los Techies urn:uuid:8a2d0bfb-9efe-2fd2-1e9b-6ba6d06055da Fri, 18 May 2018 17:00:00 +0000 While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development occurs by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day and most of problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique. <p>While there are certainly a number of apps developed by lone developers, it’s probably safe to say that the majority of professional software development occurs by teams. The people aspect of software development, more often than not, tends to be the most difficult part of software engineering. Unfortunately the software field isn’t quite like other engineering fields with well-established standards, guidelines, and apprenticeship programs. The nature of software development tends to follow an empirical process model rather than a defined process model. That is to say, software developers tend to be confronted with new problems every day and most of problems developers are solving aren’t something they’ve ever done in the exact same way with the exact same toolset. Moreover, there are often many different ways to solve the same problem, both with respect to the overall process as well as the implementation. This means that team members are often required to work together to determine how to proceed. Teams are often confronted with the need to explore multiple competing approaches as well as review one another’s designs and implementation. One thing I’ve learned during the course of my career is that the stage these types of interactions occur within the overall process has a significant impact on whether the interaction is generally viewed as collaboration or critique.</p> <p>To help illustrate what I’ve seen happen countless times both in catch-up design sessions and code reviews, consider the following two scenarios:</p> <h3 id="scenario-1">Scenario 1</h3> <p>Tom and Sally are both developers on a team maintaining a large-scale application. Tom takes the next task in the development queue which happens to have some complex processes that will need to be addressed. 
Being the good development team that they are, both Tom and Sally are aware of the requirements of the application (i.e. how the app needs to work from the user’s perspective), but they have deferred design-level discussions until the time of implementation. After Tom gets into the process a little, seeing that the problem is non-trivial, he pings Sally to help him brainstorm different approaches to solving the problem. Tom and Sally have been working together for over a year and have become accustomed to these sort of ad-hoc design sessions. As they begin discussing the problem, they each start tossing ideas out on the proverbial table resulting in multiple approaches to compare and contrast. The nature of the discussion is such that neither Tom nor Sally are embarrassed or offended when the other points out flaws in the design because there’s a sense of safety in their mutual understanding that this is a brainstorming session and that neither have thought in depth about the solutions being set froth yet. Tom throws out a couple of ideas, but ends up shooting them down himself as he uses Sally as a sounding board for the ideas. Sally does the same, but toward the end of the conversation suggests a slight alteration to one of Tom’s initial suggestions that they think may make it work after all. They end the session both with a sense that they’ve worked together to arrive at the best solution.</p> <h3 id="scenario-2">Scenario 2</h3> <p>Bill and Jake are developers on another team. They tend to work in a more siloed fashion, but they do rely upon one another for help from time to time and they are required to do code reviews prior to their code being merged into the main branch of development. Bill takes the next task in the development queue and spends the better part of an afternoon working out a solution with a basic working skeleton of the direction he’s going. The next day he decides that it might be good to have Jake take a look at the design to make him aware of the direction. Seeing where Bill’s design misses a few opportunities to make the implementation more adaptable to changes in the future, Jake points out where he would have done things differently. Bill acknowledges that Jake’s suggestions would be better and would have probably been just as easy to implement from the beginning, but inwardly he’s a bit disappointed that Jake didn’t like his design as-is and that he has to do some rework. In the end, Bill is left with a feeling of critique rather than collaboration.</p> <p>Whether it’s a high-level UML diagram or working code, how one person tends to perceive feedback on the ideas comprising a potential solution has everything to do with timing. It can be the exact same feedback they would have received either way, but when the feedback occurs often makes a difference between whether it’s perceived as collaboration or critique. It’s all about when the conversation happens.</p> Testing Button Click in React with Jest https://derikwhittaker.blog/2018/05/07/testing-button-click-in-react-with-jest/ Maintainer of Code, pusher of bits… urn:uuid:a8e7d9fd-d718-a072-55aa-0736ac21bec4 Mon, 07 May 2018 17:01:59 +0000 When building React applications you will most likely find yourself using Jest as your testing framework.  Jest has some really, really cool features built in.  But when you use Enzyme you can take your testing to the nest level. 
One really cool feature is the ability to test click events via Enzyme to ensure your &#8230; <p><a href="https://derikwhittaker.blog/2018/05/07/testing-button-click-in-react-with-jest/" class="more-link">Continue reading <span class="screen-reader-text">Testing Button Click in React with&#160;Jest</span></a></p> <p>When building <a href="https://reactjs.org/" target="_blank" rel="noopener">React</a> applications you will most likely find yourself using <a href="https://facebook.github.io/jest" target="_blank" rel="noopener">Jest</a> as your testing framework.  Jest has some really, really cool features built in.  But when you use <a href="http://airbnb.io/enzyme/docs/guides/jest.html" target="_blank" rel="noopener">Enzyme</a> you can take your testing to the nest level.</p> <p>One really cool feature is the ability to test click events via Enzyme to ensure your code responds as expected.</p> <p>Before we get started you are going to want to make sure you have Jest and Enzyme installed in your application.</p> <ul> <li>Installing <a href="https://github.com/airbnb/enzyme/blob/master/docs/installation/README.md" target="_blank" rel="noopener">Enzyme</a></li> <li>Installing <a href="https://facebook.github.io/jest/docs/en/getting-started.html" target="_blank" rel="noopener">Jest</a></li> </ul> <p>Sample code under test</p> <p><img data-attachment-id="111" data-permalink="https://derikwhittaker.blog/2018/05/07/testing-button-click-in-react-with-jest/screen-shot-2018-05-07-at-12-52-56-pm/" data-orig-file="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640" data-orig-size="580,80" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Screen Shot 2018-05-07 at 12.52.56 PM" data-image-description="" data-medium-file="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640?w=300" data-large-file="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640?w=580" class="alignnone size-full wp-image-111" src="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=640" alt="Screen Shot 2018-05-07 at 12.52.56 PM" srcset="https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png 580w, https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=150 150w, https://derikwhittaker.files.wordpress.com/2018/05/screen-shot-2018-05-07-at-12-52-56-pm.png?w=300 300w" sizes="(max-width: 580px) 100vw, 580px" /></p> <p>What I would like to be able to do is pull the button out of my component and test the <code>onClick</code> event handler.</p> <div class="code-snippet"> <pre class="code-content"> // Make sure you have your imports setup correctly import React from 'react'; import { shallow } from 'enzyme'; it('When active link clicked, will push correct filter message', () =&gt; { let passedFilterType = ''; const handleOnTotalsFilter = (filterType) =&gt; { passedFilterType = filterType; }; const accounts = {}; const wrapper = shallow(&lt;MyComponent accounts={accounts} filterHeader="" 
onTotalsFilter={handleOnTotalsFilter} /&gt;); const button = wrapper.find('#archived-button'); button.simulate('click'); expect(passedFilterType).toBe(TotalsFilterType.archived); }); </pre> </div> <p>Lets take a look at the test above</p> <ol> <li>First we are going to create a callback (click handler) to catch the bubbled up values.</li> <li>We use Enzyme to create our component <code>MyComponent</code></li> <li>We use the .find() on our wrapped component to find our &lt;Button /&gt; by id</li> <li>After we get our button we can call .simulate(&#8216;click&#8217;) which will act as a user clicking the button.</li> <li>We can assert that the expected value bubbles up.</li> </ol> <p>As you can see, simulating a click event of a rendered component is very straight forward, yet very powerful.</p> <p>Till next time,</p> Lessons from a year of Golang https://lostechies.com/ryansvihla/2018/05/07/lessons-from-a-year-of-go/ Los Techies urn:uuid:e37d6484-2864-cc2a-034c-cac3d89dede7 Mon, 07 May 2018 13:16:00 +0000 I’m hoping to share in a non-negative way help others avoid the pitfalls I ran into with my most recent work building infrastructure software on top of a Kubernetes using Go, it sounded like an awesome job at first but I ran into a lot of problems getting productive. <p>I’m hoping to share in a non-negative way help others avoid the pitfalls I ran into with my most recent work building infrastructure software on top of a Kubernetes using Go, it sounded like an awesome job at first but I ran into a lot of problems getting productive.</p> <p>This isn’t meant to evaluate if you should pick up Go or tell you what you should think of it, this is strictly meant to help people out that are new to the language but experienced in Java, Python, Ruby, C#, etc and have read some basic Go getting started guide.</p> <h2 id="dependency-management">Dependency management</h2> <p>This is probably the feature most frequently talked about by newcomers to Go and with some justification, as dependency management been a rapidly shifting area that’s nothing like what experienced Java, C#, Ruby or Python developers are used to.</p> <p>I’ll cut to the chase the default tool now is <a href="https://github.com/golang/dep">Dep</a> all other tools I’ve used such as <a href="https://github.com/Masterminds/glide">Glide</a> or <a href="https://github.com/tools/godep">Godep</a> they’re now deprecated in favor of Dep, and while Dep has advanced rapidly there are some problems you’ll eventually run into (or I did):</p> <ol> <li>Dep hangs randomly and is slow, which is supposedly network traffic <a href="https://github.com/golang/dep/blob/c8be449181dadcb01c9118a7c7b592693c82776f/docs/failure-modes.md#hangs">but it happens to everyone I know with tons of bandwidth</a>. Regardless, I’d like an option to supply a timeout and report an error.</li> <li>Versions and transitive depency conflicts can be a real breaking issue in Go still. So without shading or it’s equivalent two package depending on different versions of a given package can break your build, there are a number or proposals to fix this but we’re not there yet.</li> <li>Dep has some goofy ways it resolves transitive dependencies and you may have to add explicit references to them in your Gopkg.toml file. 
You can see an example <a href="https://kubernetes.io/blog/2018/01/introducing-client-go-version-6/">here</a> under <strong>Updating dependencies – golang/dep</strong>.</li> </ol> <h3 id="my-advice">My advice</h3> <ul> <li>Avoid hangs by checking in your dependencies directly into your source repository and just using the dependency tool (dep, godep, glide it doesn’t matter) for downloading dependencies.</li> <li>Minimize transitive dependencies by keeping stuff small and using patterns like microservices when your dependency tree conflicts.</li> </ul> <h2 id="gopath">GOPATH</h2> <p>Something that takes some adjustment is you check out all your source code in one directory with one path (by default ~/go/src ) and include the path to the source tree to where you check out. Example:</p> <ol> <li>I want to use a package I found on github called jim/awesomeness</li> <li>I have to go to ~/go/src and mkdir -p github.com/jim</li> <li>cd into that and clone the package.</li> <li>When I reference the package in my source file it’ll be literally importing github.com/jim/awesomeness</li> </ol> <p>A better guide to GOPATH and packages is <a href="https://thenewstack.io/understanding-golang-packages/">here</a>.</p> <h3 id="my-advice-1">My advice</h3> <p>Don’t fight it, it’s actually not so bad once you embrace it.</p> <h2 id="code-structure">Code structure</h2> <p>This is a hot topic and there are a few standards for the right way to structure you code from projects that do “file per class” to giant files with general concept names (think like types.go and net.go). Also if you’re used to using a lot of sub package you’re gonna to have issues with not being able to compile if for example you have two sub packages reference one another.</p> <h3 id="my-advice-2">My Advice</h3> <p>In the end I was reasonably ok with something like the following:</p> <ul> <li>myproject/bin for generated executables</li> <li>myproject/cmd for command line code</li> <li>myproject/pkg for code related to the package</li> </ul> <p>Now whatever you do is fine, this was just a common idiom I saw, but it wasn’t remotely all projects. I also had some luck with just jamming everything into the top level of the package and keeping packages small (and making new packages for common code that is used in several places in the code base). If I ever return to using Go for any reason I will probably just jam everything into the top level directory.</p> <h2 id="debugging">Debugging</h2> <p>No debugger! There are some projects attempting to add one but Rob Pike finds them a crutch.</p> <h3 id="my-advice-3">My Advice</h3> <p>Lots of unit tests and print statements.</p> <h2 id="no-generics">No generics</h2> <p>Sorta self explanatory and it causes you a lot of pain when you’re used to reaching for these.</p> <h3 id="my-advice-4">My advice</h3> <p>Look at the code generation support which uses pragmas, this is not exactly the same as having generics but if you have some code that has a lot of boiler plate without them this is a valid alternative. 
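The usual shape of it is a //go:generate comment pointing at a generator such as stringer; a minimal, hypothetical sketch (the type and constants are made up) looks like this:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>//go:generate stringer -type=Status

// Status is plain Go; after installing stringer (go get golang.org/x/tools/cmd/stringer),
// running `go generate ./...` writes status_string.go with a String() method for these values.
type Status int

const (
	Pending Status = iota
	Active
	Done
)
</code></pre></div></div> <p>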
See this official <a href="https://blog.golang.org/generate">Go Blog post</a> for more details.</p> <p>If you don’t want to use generation you really only have reflection left as a valid tool, which comes with all of it’s lack of speed and type safety.</p> <h2 id="cross-compiling">Cross compiling</h2> <p>If you have certain features or dependencies you may find you cannot take advantage of one of Go’s better features cross compilation.</p> <p>I ran into this when using the confluent-go/kafka library which depends on the C librdkafka library. It basically meant I had to do all my development in a Linux VM because almost all our packages relied on this.</p> <h3 id="my-advice-5">My Advice</h3> <p>Avoid C dependencies at all costs.</p> <h2 id="error-handling">Error handling</h2> <p>Go error handling is not exception base but return based, and it’s got a lot of common idioms around it:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>myValue, err := doThing() if err != nil { return -1, fmt.Errorf(“unable to doThing %v”, err) } </code></pre></div></div> <p>Needless to say this can get very wordy when dealing with deeply nested exceptions or when you’re interacting a lot with external systems. It is definitely a mind shift if you’re used to the throwing exceptions wherever and have one single place to catch all exceptions where they’re handled appropriately.</p> <h3 id="my-advice-6">My Advice</h3> <p>I’ll be honest I never totally made my peace with this. I had good training from experienced opensource contributors to major Go projects, read all the right blog posts, definitely felt like I’d heard enough from the community on why the current state of Go error handling was great in their opinions, but the lack of stack traces was a deal breaker for me.</p> <p>On the positive side, I found Dave Cheney’s advice on error handling to be the most practical and he wrote <a href="https://github.com/pkg/errors">a package</a> containing a lot of that advice, we found it invaluable as it provided those stack traces we all missed but you had to remember to use it.</p> <h2 id="summary">Summary</h2> <p>A lot of people really love Go and are very productive with it, I just was never one of those people and that’s ok. However, I think the advice in this post is reasonably sound and uncontroversial. So, if you find yourself needing to write some code it Go, give this guide a quick perusal and you’ll waste a lot less time than I did getting productive in developing applications in Go.</p> Raspberry Pi Kubernetes Cluster - Part 2 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2/ Jason Meridth urn:uuid:0aef121f-48bd-476f-e09d-4ca0aa2ac602 Thu, 03 May 2018 02:13:07 +0000 <p>Howdy again.</p> <p>Alright, my 8 port switch showed up so I was able to connect my raspberry 3B+ boards to my home network. I plugged it in with 6 1ft CAT5 cables I had in my catch-all box that all of us nerds have. I’d highly suggest flexible CAT 6 cables instead if you can get them, like <a href="https://www.amazon.com/Cat-Ethernet-Cable-Black-Connectors/dp/B01IQWGKQ6">here</a>. I ordered them and they showed up before I finished this post, so I am using the CAT6 cables.</p> <!--more--> <p>The IP addresses they will receive initialy from my home router via DHCP can be determined with nmap. 
Lets imagine my subnet is 192.168.1.0/24.</p> <p>Once I got them on the network I did the following:</p> <script src="https://gist.github.com/64e7b08729ffe779f77a7bda0221c6e9.js"> </script> <h3 id="install-raspberrian-os-on-sd-cards">Install Raspberrian OS On SD Cards</h3> <p>You can get the Raspberry Pi Stretch Lite OS from <a href="https://www.raspberrypi.org/downloads/raspbian/">here</a>.</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/raspberry_pi_stretch_lite.png" alt="Raspberry Pi Stretch Lite" /></p> <p>Then use the <a href="https://etcher.io/">Etcher</a> tool to install it to each of the 6 SD cards.</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/etcher.gif" alt="Etcher" /></p> <h4 id="important">IMPORTANT</h4> <p>Before putting the cards into the Raspberry Pis you need to add a <code class="highlighter-rouge">ssh</code> folder to the root of the SD cards. This will allow you to ssh to each Raspberry Pi with default credentials (username: <code class="highlighter-rouge">pi</code> and password <code class="highlighter-rouge">raspberry</code>). Example: <code class="highlighter-rouge">ssh pi@raspberry_pi_ip</code> where <code class="highlighter-rouge">raspberry_ip</code> is obtained from the nmap command above.</p> <p>Next post will be setting up kubernetes. Thank you for reading.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-2/">Raspberry Pi Kubernetes Cluster - Part 2</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on May 02, 2018.</p> Multi-Environment Deployments with React https://derikwhittaker.blog/2018/04/10/multi-environment-deployments-with-react/ Maintainer of Code, pusher of bits… urn:uuid:4c0ae985-09ac-6d2e-0429-addea1632ea3 Tue, 10 Apr 2018 12:54:17 +0000 If you are using Create-React-App to scaffold your react application there is built in support for changing environment variables based on the NODE_ENV values, this is done by using .env files.  In short this process works by having a .env, .env.production, .env.development set of files.  When you run/build your application CRA will set the NODE_ENV value &#8230; <p><a href="https://derikwhittaker.blog/2018/04/10/multi-environment-deployments-with-react/" class="more-link">Continue reading <span class="screen-reader-text">Multi-Environment Deployments with&#160;React</span></a></p> <p>If you are using <a href="https://github.com/facebook/create-react-app" target="_blank" rel="noopener">Create-React-App</a> to scaffold your react application there is <a href="https://github.com/facebook/create-react-app/blob/master/packages/react-scripts/template/README.md#adding-development-environment-variables-in-env" target="_blank" rel="noopener">built in support</a> for changing environment variables based on the NODE_ENV values, this is done by using .env files.  In short this process works by having a .env, .env.production, .env.development set of files.  When you run/build your application <a href="https://github.com/facebook/create-react-app" target="_blank" rel="noopener">CRA</a> will set the NODE_ENV value to either <code>development</code> or <code>production</code> and based on these values the correct .env file will be used.</p> <p>This works great, when you have a simple deploy setup. But many times in enterprise level applications you need support for more than just 2 environments, many times it is 3-4 environments.  
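As a quick refresher, that built-in convention is just a handful of files with REACT_APP_-prefixed variables, along these lines (the variable name and URLs are made up):</p> <div class="code-snippet"> <pre class="code-content"># .env.development
REACT_APP_API_URL=http://localhost:3001

# .env.production
REACT_APP_API_URL=https://api.example.com
</pre> </div> <p>CRA only exposes variables that start with REACT_APP_, and they are baked into the bundle at build time.</p> <p>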
Common logic would suggest that you can accomplish this via the built in mechanism by having additional .env files and changing the NODE_ENV value to the value you care about. However, CRA does not support this without doing an <code>eject</code>, which will eject all the default conventions and leave it to you to configure your React application. Maybe this is a good idea, but in my case ejecting was not something I wanted to do.</p> <p>Because I did not want to do an <code>eject</code> I needed to find another solution, and after a fair amount of searching I found a solution that seems to work for me and my needs and is about the amount of effort I wanted <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" alt="🙂" /></p> Raspberry Pi Kubernetes Cluster - Part 1 https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1/ Jason Meridth urn:uuid:bd3470f6-97d5-5028-cf12-0751f90915c3 Sat, 07 Apr 2018 14:01:00 +0000 <p>Howdy</p> <p>This is going to be the first post about my setup of a Raspberry Pi Kubernetes Cluster. I saw a post by <a href="https://harthoover.com/kubernetes-1.9-on-a-raspberry-pi-cluster/">Hart Hoover</a> and it finally motivated me to purchase his “grocery list” and do this. I’ve been using <a href="https://kubernetes.io/docs/getting-started-guides/minikube/">Minikube</a> for local Kubernetes testing but it doesn’t give you multi-host testing abilities. I’ve also been wanting to get deeper into my Raspberry Pi knowledge. Lots of learning and winning.</p> <p>The items I bought were:</p> <ul> <li>Six <a href="https://smile.amazon.com/dp/B07BFH96M3">Raspberry Pi 3 Model B+ Motherboards</a></li> <li>Six <a href="https://smile.amazon.com/gp/product/B010Q57T02/">SanDisk Ultra 32GB microSDHC UHS-I Card with Adapter, Grey/Red, Standard Packaging (SDSQUNC-032G-GN6MA)</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B011KLFERG/ref=oh_aui_detailpage_o02_s01?ie=UTF8&amp;psc=1">Sabrent 6-Pack 22AWG Premium 3ft Micro USB Cables High Speed USB 2.0 A Male to Micro B Sync and Charge Cables Black CB-UM63</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B01L0KN8OS/ref=oh_aui_detailpage_o02_s01?ie=UTF8&amp;psc=1">AmazonBasics 6-Port USB Wall Charger (60-Watt) - Black</a></li> <li>One <a href="https://smile.amazon.com/gp/product/B01D9130QC/ref=oh_aui_detailpage_o02_s00?ie=UTF8&amp;psc=1">GeauxRobot Raspberry Pi 3 Model B 6-layer Dog Bone Stack Clear Case Box Enclosure also for Pi 2B B+ A+ B A</a></li> <li>One <a href="http://amzn.to/2gNzLzi">Black Box 8-Port Switch</a></li> </ul> <p>Here is the tweet when it all arrived:</p> <div class="jekyll-twitter-plugin"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">I blame <a href="https://twitter.com/hhoover?ref_src=twsrc%5Etfw">@hhoover</a> ;). I will be building my <a href="https://twitter.com/kubernetesio?ref_src=twsrc%5Etfw">@kubernetesio</a> cluster once the 6pi case shows up next Wednesday. The extra pi is to upgrade my <a href="https://twitter.com/RetroPieProject?ref_src=twsrc%5Etfw">@RetroPieProject</a>. Touch screen is an addition I want to try. Side project here I come. 
<a href="https://t.co/EebIKbsCeH">pic.twitter.com/EebIKbsCeH</a></p>&mdash; Jason Meridth (@jmeridth) <a href="https://twitter.com/jmeridth/status/980075584725422080?ref_src=twsrc%5Etfw">March 31, 2018</a></blockquote> <script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script> </div> <p>I spent this morning finally putting it together.</p> <p>Here is me getting started on the “dogbone case” to hold all of the Raspberry Pis:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_2.jpg" alt="The layout" /></p> <p>The bottom and one layer above:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_3.jpg" alt="The bottom and one layer above" /></p> <p>And the rest:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_4.jpg" alt="3 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_11.jpg" alt="4 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_12.jpg" alt="5 Layers" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_13.jpg" alt="6 Layers and Finished" /></p> <p>Different angles completed:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_14.jpg" alt="Finished Angle 2" /></p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_15.jpg" alt="Finished Angle 3" /></p> <p>And connect the power:</p> <p><img src="https://blog.jasonmeridth.com/images/kubernetes_cluster/case_16.jpg" alt="Power" /></p> <p>Next post will be on getting the 6 sandisk cards ready and putting them in and watching the Raspberry Pis boot up and get a green light. Stay tuned.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/raspberry-pi-kubernetes-cluster-part-1/">Raspberry Pi Kubernetes Cluster - Part 1</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on April 07, 2018.</p> Building AWS Infrastructure with Terraform: S3 Bucket Creation https://derikwhittaker.blog/2018/04/06/building-aws-infrastructure-with-terraform-s3-bucket-creation/ Maintainer of Code, pusher of bits… urn:uuid:cb649524-d882-220f-c253-406a54762705 Fri, 06 Apr 2018 14:28:49 +0000 If you are going to be working with any cloud provider it is highly suggested that you script out the creation/maintenance of your infrastructure.  In the AWS word you can use the native CloudFormation solution, but honestly I find this painful and the docs very lacking.  Personally, I prefer Terraform by Hashicorp.  In my experience &#8230; <p><a href="https://derikwhittaker.blog/2018/04/06/building-aws-infrastructure-with-terraform-s3-bucket-creation/" class="more-link">Continue reading <span class="screen-reader-text">Building AWS Infrastructure with Terraform: S3 Bucket&#160;Creation</span></a></p> <p>If you are going to be working with any cloud provider it is highly suggested that you script out the creation/maintenance of your infrastructure.  In the AWS word you can use the native <a href="https://www.googleadservices.com/pagead/aclk?sa=L&amp;ai=DChcSEwjD-Lry6KXaAhUMuMAKHTB8AYwYABAAGgJpbQ&amp;ohost=www.google.com&amp;cid=CAESQeD2aF3IUBPQj5YF9K0xmz0FNtIhnq3PzYAHFV6dMZVIirR_psuXDSgkzxZ0jXoyWfpECufNNfbp7JzHQ73TTrQH&amp;sig=AOD64_1b_L781SLpKXqLTFFYIk5Zv3BcHA&amp;q=&amp;ved=0ahUKEwi1l7Hy6KXaAhWD24MKHQXSCQ0Q0QwIJw&amp;adurl=" target="_blank" rel="noopener">CloudFormation</a> solution, but honestly I find this painful and the docs very lacking.  
Personally, I prefer <a href="https://www.terraform.io/" target="_blank" rel="noopener">Terraform</a> by <a href="https://www.hashicorp.com/" target="_blank" rel="noopener">Hashicorp</a>.  In my experience the simplicity and easy of use, not to mention the stellar documentation make this the product of choice.</p> <p>This is the initial post in what I hope to be a series of post about how to use Terraform to setup/build AWS Infrastructure.</p> <p>Terrform Documentation on S3 Creation -&gt; <a href="https://www.terraform.io/docs/providers/aws/d/s3_bucket.html" target="_blank" rel="noopener">Here</a></p> <p>In this post I will cover 2 things</p> <ol> <li>Basic bucket setup</li> <li>Bucket setup as Static website</li> </ol> <p>Setting up a basic bucket we can use the following</p> <div class="code-snippet"> <pre class="code-content">resource "aws_s3_bucket" "my-bucket" { bucket = "my-bucket" acl = "private" tags { Any_Tag_Name = "Tag value for tracking" } } </pre> </div> <p>When looking at the example above the only 2 values that are required are bucket and acl.</p> <p>I have added the use of Tags to show you can add custom tags to your bucket</p> <p>Another way to setup an S3 bucket is to act as a Static Web Host.   Setting this up takes a bit more configuration, but not a ton.</p> <div class="code-snippet"> <pre class="code-content">resource "aws_s3_bucket" "my-website-bucket" { bucket = "my-website-bucket" acl = "public-read" website { index_document = "index.html" error_document = "index.html" } policy = &lt;&lt;POLICY { "Version": "2012-10-17", "Statement": [ { "Sid": "AddPerm", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-website-bucket/*" } ] } POLICY tags { Any_Tag_Name = "Tag value for tracking" } } </pre> </div> <p>The example above has 2 things that need to be pointed out.</p> <ol> <li>The website settings.  Make sure you setup the correct pages here for index/error</li> </ol> <p>The Policy settings.  Here I am using just basic policy.  You can of course setup any policy here you want/need.</p> <p>As you can see, setting up S3 buckets is very simple and straight forward.</p> <p><strong><em>*** Reminder: S3 bucket names MUST be globally unique ***</em></strong></p> <p>Till next time,</p> SSH - Too Many Authentication Failures https://blog.jasonmeridth.com/posts/ssh-too-many-authentication-failures/ Jason Meridth urn:uuid:d7fc1034-1798-d75e-1d61-84fac635dda4 Wed, 28 Mar 2018 05:00:00 +0000 <h1 id="problem">Problem</h1> <p>I started seeing this error recently and had brain farted on why.</p> <figure class="highlight"><pre><code class="language-bash" data-lang="bash">Received disconnect from 123.123.132.132: Too many authentication failures <span class="k">for </span>hostname</code></pre></figure> <p>After a bit of googling it came back to me. This is because I’ve loaded too many keys into my ssh-agent locally (<code class="highlighter-rouge">ssh-add</code>). Why did you do that? Well, because it is easier than specifying the <code class="highlighter-rouge">IdentityFile</code> on the cli when trying to connect. But there is a threshhold. This is set by the ssh host by the <code class="highlighter-rouge">MaxAuthTries</code> setting in <code class="highlighter-rouge">/etc/ssh/sshd_config</code>. 
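A quick way to see both sides of that (how many identities your agent will offer, and what the server is configured to allow) is something like the following:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># how many keys will your local agent offer?
ssh-add -l | wc -l

# what does the server allow? (run on the ssh host)
grep -i 'MaxAuthTries' /etc/ssh/sshd_config
</code></pre></div></div> <p>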
The default is 6.</p> <h1 id="solution-1">Solution 1</h1> <p>Clean up the keys in your ssh-agent.</p> <p><code class="highlighter-rouge">ssh-add -l</code> lists all the keys you have in your ssh-agent <code class="highlighter-rouge">ssh-add -d key</code> deletes the key from your ssh-agent</p> <h1 id="solution-2">Solution 2</h1> <p>You can solve this on the command line like this:</p> <p><code class="highlighter-rouge">ssh -o IdentitiesOnly=yes -i ~/.ssh/example_rsa foo.example.com</code></p> <p>What is IdentitiesOnly? Explained in Solution 3 below.</p> <h1 id="solution-3-best">Solution 3 (best)</h1> <p>Specifiy, explicitly, which key goes to which host(s) in your <code class="highlighter-rouge">.ssh/config</code> file.</p> <p>You need to configure which key (“IdentityFile”) goes with which domain (or host). You also want to handle the case when the specified key doesn’t work, which would usually be because the public key isn’t in ~/.ssh/authorized_keys on the server. The default is for SSH to then try any other keys it has access to, which takes us back to too many attempts. Setting “IdentitiesOnly” to “yes” tells SSH to only try the specified key and, if that fails, fall through to password authentication (presuming the server allows it).</p> <p>Your ~/.ssh/config would look like:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host *.myhost.com IdentitiesOnly yes IdentityFile ~/.ssh/myhost Host secure.myhost.com IdentitiesOnly yes IdentityFile ~/.ssh/mysecurehost_rsa Host *.myotherhost.domain IdentitiesOnly yes IdentityFile ~/.ssh/myotherhost_rsa </code></pre></div></div> <p><code class="highlighter-rouge">Host</code> is the host the key can connect to <code class="highlighter-rouge">IdentitiesOnly</code> means only to try <em>this</em> specific key to connect, no others <code class="highlighter-rouge">IdentityFile</code> is the path to the key</p> <p>You can try multiple keys if needed</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host *.myhost.com IdentitiesOnly yes IdentityFile ~/.ssh/myhost_rsa IdentityFile ~/.ssh/myhost_dsa </code></pre></div></div> <p>Hope this helps someone else.</p> <p>Cheers!</p> <p><a href="https://blog.jasonmeridth.com/posts/ssh-too-many-authentication-failures/">SSH - Too Many Authentication Failures</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 28, 2018.</p> Clear DNS Cache In Chrome https://blog.jasonmeridth.com/posts/clear-dns-cache-in-chrome/ Jason Meridth urn:uuid:6a2c8c0b-c91b-5f7d-dbc7-8065f0a2f1fd Tue, 27 Mar 2018 20:42:00 +0000 <p>I’m blogging this because I keep forgetting how to do it. Yeah, yeah, I can google it. 
I run this blog so I know it is always available…..anywho.</p> <p>Go To:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>chrome://net-internals/#dns </code></pre></div></div> <p>Click “Clear host cache” button</p> <p><img src="https://blog.jasonmeridth.com/images/clear_dns_cache_in_chrome.png" alt="clear_dns_cache_in_chrome" /></p> <p>Hope this helps someone else.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/clear-dns-cache-in-chrome/">Clear DNS Cache In Chrome</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 27, 2018.</p> Create Docker Container from Errored Container https://blog.jasonmeridth.com/posts/create-docker-container-from-errored-container/ Jason Meridth urn:uuid:33d5a6b5-4c48-ae06-deb6-a505edc6b427 Mon, 26 Mar 2018 03:31:00 +0000 <p>When I’m trying to “dockerize” an applciation I usually have to work through some wonkiness.</p> <p>To diagnose a container that has errored out, I, obviously, look at the logs via <code class="highlighter-rouge">docker logs -f [container_name]</code>. That is sometimes helpful. It will, at minimum tell me where I need to focus on the new container I’m going to create.</p> <p><img src="https://blog.jasonmeridth.com/images/diagnose.jpg" alt="diagnose" /></p> <p>Pre-requisites to being able to build a diagnosis container:</p> <ul> <li>You need to use <code class="highlighter-rouge">CMD</code>, <em>not</em> <code class="highlighter-rouge">ENTRYPOINT</code> in the Dockerfile <ul> <li>with <code class="highlighter-rouge">CMD</code> you’ll be able to start a shell, with <code class="highlighter-rouge">ENTRYPOINT</code> your diagnosis container will just keep trying to run that</li> </ul> </li> </ul> <p>To create a diagnosis container, do the following:</p> <ul> <li>Check your failed container ID by <code class="highlighter-rouge">docker ps -a</code></li> <li>Create docker image form the container with <code class="highlighter-rouge">docker commit</code> <ul> <li>example: <code class="highlighter-rouge">docker commit -m "diagnosis" [failed container id]</code></li> </ul> </li> <li>Check the newly create docker image ID by <code class="highlighter-rouge">docker images</code></li> <li><code class="highlighter-rouge">docker run -it [new container image id] sh</code> <ul> <li>this takes you into a container immediately after the error occurred.</li> </ul> </li> </ul> <p>Hope this helps someone else.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/create-docker-container-from-errored-container/">Create Docker Container from Errored Container</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on March 25, 2018.</p> Log Early, Log Often… Saved my butt today https://derikwhittaker.blog/2018/03/21/log-early-log-often-saved-my-butt-today/ Maintainer of Code, pusher of bits… urn:uuid:395d9800-e7ce-27fd-3fc1-5e68628bc161 Wed, 21 Mar 2018 13:16:03 +0000 In a prior posting (AWS Lambda:Log Early Log often, Log EVERYTHING) I wrote about the virtues and value about having really in depth logging, especially when working with cloud services.  Well today this logging saved my ASS a ton of detective work. 
Little Background I have a background job (Lambda that is called on a schedule) &#8230; <p><a href="https://derikwhittaker.blog/2018/03/21/log-early-log-often-saved-my-butt-today/" class="more-link">Continue reading <span class="screen-reader-text">Log Early, Log Often&#8230; Saved my butt&#160;today</span></a></p> <p>In a prior <a href="https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/" target="_blank" rel="noopener">posting (AWS Lambda:Log Early Log often, Log EVERYTHING)</a> I wrote about the virtues and value about having really in depth logging, especially when working with cloud services.  Well today this logging saved my ASS a ton of detective work.</p> <p><strong>Little Background</strong><br /> I have a background job (Lambda that is called on a schedule) to create/update data cache in a <a href="https://aws.amazon.com/dynamodb/" target="_blank" rel="noopener">DynamoDB</a> table.  Basically this job will pull data from one data source and attempt to push it as create/update/delete to our Dynamo table.</p> <p>Today when I was running our application I noticed things were not loading right, in fact I had javascript errors because of null reference errors.  I knew that the issue had to be in our data, but was not sure what was wrong.  If I had not had a ton of logging (debug and info) I would have had to run our code locally and step though/debug code for hundreds of items of data.</p> <p>However, because of in depth logging I was able to quickly go to <a href="https://aws.amazon.com/cloudwatch/" target="_blank" rel="noopener">CloudWatch</a> and filter on a few key words and narrow hundreds/thousands of log entries down to 5.  Once I had these 5 entries I was able to expand a few of those entries and found the error within seconds.</p> <p>Total time to find the error was less than 5 minutes and I never opened a code editor or stepped into code.</p> <p>The moral of this story, because I log everything, including data (no PII of course) I was able to quickly find the source of the error.  Now to fix the code&#8230;.</p> <p>Till next time,</p> AWS Lambda: Log early, Log often, Log EVERYTHING https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/ Maintainer of Code, pusher of bits… urn:uuid:6ee7f59b-7f4c-1312-bfff-3f9c46ec8701 Tue, 06 Mar 2018 14:00:58 +0000 In the world of building client/server applications logs are important.  They are helpful when trying to see what is going on in your application.  I have always held the belief  that your logs need to be detailed enough to allow you to determine the WHAT and WHERE without even looking at the code. But lets &#8230; <p><a href="https://derikwhittaker.blog/2018/03/06/aws-lambda-log-early-log-often-log-everything/" class="more-link">Continue reading <span class="screen-reader-text">AWS Lambda: Log early, Log often, Log&#160;EVERYTHING</span></a></p> <p>In the world of building client/server applications logs are important.  They are helpful when trying to see what is going on in your application.  I have always held the belief  that your logs need to be detailed enough to allow you to determine the WHAT and WHERE without even looking at the code.</p> <p>But lets be honest, in most cases when building client/server applications logs are an afterthought.  
Often this is because you can pretty easily (in most cases) debug your application and step through the code.</p> <p>When building <a href="https://aws.amazon.com/serverless/" target="_blank" rel="noopener">serverless</a> applications with technologies like <a href="https://aws.amazon.com/lambda/" target="_blank" rel="noopener">AWS Lambda</a> functions (this holds true for Azure Functions as well), your logging game really needs to step up.</p> <p>The reason for this is that you cannot really debug your Lambda in the wild (you can to some degree locally with AWS SAM or the Serverless framework).  Because of this you need to produce detailed enough logs to allow you to easily determine the WHAT and WHERE.</p> <p>When I build my serverless functions I have a few guidelines I follow:</p> <ol> <li>Info Log calls to methods, output argument data (make sure no <a href="https://en.wikipedia.org/wiki/Personally_identifiable_information" target="_blank" rel="noopener">PII</a>/<a href="https://en.wikipedia.org/wiki/Protected_health_information" target="_blank" rel="noopener">PHI</a>)</li> <li>Error Log any failures (in try/catch or .catch for promises)</li> <li>Debug Log any critical decision points</li> <li>Info Log exit calls at top level methods</li> </ol> <p>I also like to set up a simple and consistent format for my logs.  The example I follow for my Lambda logs is seen below</p> <div class="code-snippet"> <pre class="code-content">timestamp: [logLevel] : [Class.Method] - message {data points} </pre> </div> <p>I have found that if I follow these general guidelines the pain of determining failure points in serverless environments is heavily reduced.</p> <p>Till next time,</p> Sinon Error: Attempted to wrap undefined property ‘XYZ as function https://derikwhittaker.blog/2018/02/27/sinon-error-attempted-to-wrap-undefined-property-xyz-as-function/ Maintainer of Code, pusher of bits… urn:uuid:b41dbd54-3804-6f6d-23dc-d2a04635033a Tue, 27 Feb 2018 13:45:29 +0000 I ran into a fun little error recently when working on a ReactJs application.  In my application I was using SinonJs to set up some spies on a method; I wanted to capture the input arguments for verification.  However, when I ran my test I received the following error. Attempted to wrap undefined property handlOnAccountFilter as &#8230; <p><a href="https://derikwhittaker.blog/2018/02/27/sinon-error-attempted-to-wrap-undefined-property-xyz-as-function/" class="more-link">Continue reading <span class="screen-reader-text">Sinon Error: Attempted to wrap undefined property &#8216;XYZ as&#160;function</span></a></p> <p>I ran into a fun little error recently when working on a <a href="https://reactjs.org/" target="_blank" rel="noopener">ReactJs</a> application.  In my application I was using <a href="http://sinonjs.org/" target="_blank" rel="noopener">SinonJs</a> to set up some spies on a method; I wanted to capture the input arguments for verification.
However, when I ran my test I received the following error.</p> <blockquote><p>Attempted to wrap undefined property handlOnAccountFilter as function</p></blockquote> <p>My method under test is set up as such</p> <div class="code-snippet"> <pre class="code-content">handleOnAccountFilter = (filterModel) =&gt; {
  // logic here
}
</pre> </div> <p>I was using the above syntax, the <a href="https://github.com/jeffmo/es-class-public-fields" target="_blank" rel="noopener">proposed class property</a> feature, which will automatically bind the <code>this</code> context of the class to my method.</p> <p>My sinon spy is set up as such</p> <div class="code-snippet"> <pre class="code-content">let handleOnAccountFilterSpy = null;

beforeEach(() =&gt; {
  handleOnAccountFilterSpy = sinon.spy(AccountsListingPage.prototype, 'handleOnAccountFilter');
});

afterEach(() =&gt; {
  handleOnAccountFilterSpy.restore();
});
</pre> </div> <p>Everything looked right, but I was still getting this error.  It turns out that this error is due in part to the way that the Class Property feature implements handleOnAccountFilter.  When you use this feature the method/property is added to the class as an instance method/property, not as a prototype method/property.  This means that sinon is not able to gain access to it prior to creating an instance of the class.</p> <p>To solve my issue I had to make a change in the implementation to the following</p> <div class="code-snippet"> <pre class="code-content">handleOnAccountFilter(filterModel) {
  // logic here
}
</pre> </div> <p>After making the above change I needed to determine how I wanted to bind <code>this</code> to my method (Cory shows 5 ways to do this <a href="https://medium.freecodecamp.org/react-binding-patterns-5-approaches-for-handling-this-92c651b5af56" target="_blank" rel="noopener">here</a>).  I chose to bind <code>this</code> inside the constructor as below</p> <div class="code-snippet"> <pre class="code-content">constructor(props){
  super(props);

  this.handleOnAccountFilter = this.handleOnAccountFilter.bind(this);
}
</pre> </div> <p>I am not a huge fan of having to do this (pun intended), but oh well.  This solved my issues.</p> <p>Till next time</p> Ensuring componentDidMount is not called in Unit Tests https://derikwhittaker.blog/2018/02/22/ensuring-componentdidmount-is-not-called-in-unit-tests/ Maintainer of Code, pusher of bits… urn:uuid:da94c1a3-2de4-a90c-97f5-d7361397a33c Thu, 22 Feb 2018 19:45:53 +0000 If you are building a ReactJs application you will often times implement componentDidMount on your components.  This is very handy at runtime, but can pose an issue for unit tests. If you are building tests for your React app you are very likely using enzyme to create instances of your component.  The issue is that when enzyme creates &#8230; <p><a href="https://derikwhittaker.blog/2018/02/22/ensuring-componentdidmount-is-not-called-in-unit-tests/" class="more-link">Continue reading <span class="screen-reader-text">Ensuring componentDidMount is not called in Unit&#160;Tests</span></a></p> <p>If you are building a <a href="https://reactjs.org/" target="_blank" rel="noopener">ReactJs</a> application you will often times implement <code>componentDidMount</code> on your components.  This is very handy at runtime, but can pose an issue for unit tests.</p> <p>If you are building tests for your React app you are very likely using <a href="http://airbnb.io/projects/enzyme/" target="_blank" rel="noopener">enzyme</a> to create instances of your component.
The issue is that when enzyme creates the component it invokes the lifecyle methods, like <code>componentDidMount</code>.  Sometimes we do not want this to be called, but how to suppress this?</p> <p>I have found 2 different ways to suppress/mock <code>componentDidMount</code>.</p> <p>Method one is to redefine <code>componentDidMount</code> on your component for your tests.  This could have interesting side effects so use with caution.</p> <div class="code-snippet"> <pre class="code-content"> describe('UsefullNameHere', () =&gt; { beforeAll(() =&gt; { YourComponent.prototype.componentDidMount = () =&gt; { // can omit or add custom logic }; }); }); </pre> </div> <p>Basically above I am just redefining the componentDidMount method on my component.  This works and allows you to have custom logic.  Be aware that when doing above you will have changed the implementation for your component for the lifetime of your test session.</p> <p>Another solution is to use a mocking framework like <a href="http://sinonjs.org/" target="_blank" rel="noopener">SinonJs</a>.  With Sinon you can stub out the <code>componentDidMount</code> implementation as seen below</p> <div class="code-snippet"> <pre class="code-content"> describe('UsefullNameHere', () =&gt; { let componentDidMountStub = null; beforeAll(() =&gt; { componentDidMountStub = sinon.stub(YourComponent.prototype, 'componentDidMount').callsFake(function() { // can omit or add custom logic }); }); afterAll(() =&gt; { componentDidMountStub.restore(); }); }); </pre> </div> <p>Above I am using .stub to redefine the method.  I also added .<a href="http://sinonjs.org/releases/v4.3.0/stubs/" target="_blank" rel="noopener">callsFake</a>() but this can be omitted if you just want to ignore the call.  You will want to make sure you restore your stub via the afterAll, otherwise you will have stubbed out the call for the lifetime of your test session.</p> <p>Till next time,</p> Los Techies Welcomes Derik Whittaker https://lostechies.com/derekgreer/2018/02/21/los-techies-welcomes-derik-whittaker/ Los Techies urn:uuid:adc9a1c8-48ea-3bea-1aa7-320d51db12a1 Wed, 21 Feb 2018 11:00:00 +0000 Los Techies would like to introduce, and extend a welcome to Derik Whittaker. Derik is a C# MVP, member of the AspInsiders group, community speaker, and Pluralsight author. Derik was previously a contributor at CodeBetter.com. Welcome, Derik! <p>Los Techies would like to introduce, and extend a welcome to Derik Whittaker. Derik is a C# MVP, member of the AspInsiders group, community speaker, and Pluralsight author. Derik was previously a contributor at <a href="http://codebetter.com/">CodeBetter.com</a>. Welcome, Derik!</p> Ditch the Repository Pattern Already https://lostechies.com/derekgreer/2018/02/20/ditch-the-repository-pattern-already/ Los Techies urn:uuid:7fab2063-d833-60ce-9e46-e4a413ec8391 Tue, 20 Feb 2018 21:00:00 +0000 One pattern that still seems particularly common among .Net developers is the Repository pattern. I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago. 
<p>One pattern that still seems particularly common among .Net developers is the <a href="https://martinfowler.com/eaaCatalog/repository.html">Repository pattern.</a> I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago.</p> <p>I had read several articles over the years advocating abandoning the Repository pattern in favor of other suggested approaches which served as a pebble in my shoe for a few years, but there were a few design principles whose application seemed to keep motivating me to use the pattern.  It wasn’t until a change of tooling and a shift in thinking about how these principles should be applied that I finally felt comfortable ditching the use of repositories, so I thought I’d recount my journey to provide some food for thought for those who still feel compelled to use the pattern.</p> <h2 id="mental-obstacle-1-testing-isolation">Mental Obstacle 1: Testing Isolation</h2> <p>What I remember being the biggest barrier to moving away from the use of repositories was writing tests for components which interacted with the database.  About a year or so before I actually abandoned use of the pattern, I remember trying to stub out a class derived from Entity Framework’s DbContext after reading an anti-repository blog post.  I don’t remember the details now, but I remember it being painful and even exploring use of a 3rd-party library designed to help write tests for components dependent upon Entity Framework.  I gave up after a while, concluding it just wasn’t worth the effort.  It wasn’t as if my previous approach was pain-free, as at that point I was accustomed to stubbing out particularly complex repository method calls, but as with many things we often don’t notice friction to which we’ve become accustomed for one reason or another.  I had assumed that doing all that work to stub out my repositories was what I should be doing.</p> <p>Another principle that I picked up from somewhere (maybe the big <a href="http://xunitpatterns.com/">xUnit Test Patterns</a> book? … I don’t remember) that seemed to keep me bound to my repositories was that <a href="http://aspiringcraftsman.com/2012/04/01/tdd-best-practices-dont-mock-others/">you shouldn’t write tests that depend upon dependencies you don’t own</a>.  I believed at the time that I should be writing tests for Application Layer services (which later morphed into discrete dispatched command handlers) and the idea of stubbing out either NHIbernate or Entity Framework violated my sensibilities.</p> <h2 id="mental-obstacle-2-the-dependency-inversion-principle-adherence">Mental Obstacle 2: The Dependency Inversion Principle Adherence</h2> <p>The Dependency Inversion Principle seems to be a source of confusion for many which stems in part from the similarity of wording with the practice of <a href="https://lostechies.com/derickbailey/2011/09/22/dependency-injection-is-not-the-same-as-the-dependency-inversion-principle/">Dependency Injection</a> as well as from the fact that the pattern’s formal definition reflects the platform from whence the principle was conceived (i.e. C++).  One might say that the abstract definition of the Dependency Inversion Principle was too dependent upon the details of its origin (ba dum tss).  
I’ve written about the principle a few times (perhaps my most succinct being <a href="https://stackoverflow.com/a/1113937/1219618">this Stack Overflow answer</a>), but put simply, the Dependency Inversion Principle has at its primary goal the decoupling of the portions of your application which define <i>policy</i> from the portions which define <i>implementation</i>.  That is to say, this principle seeks to keep the portions of your application which govern what your application does (e.g. workflow, business logic, etc.) from being tightly coupled to the portions of your application which govern the low level details of how it gets done (e.g. persistence to an Sql Server database, use of Redis for caching, etc.).</p> <p>A good example of a violation of this principle, which I recall from my NHibernate days, was that once upon a time NHibernate was tightly coupled to log4net.  This was later corrected, but at one time the NHibernate assembly had a hard dependency on log4net.  You could use a different logging library for your own code if you wanted, and you could use binding redirects to use a different version of log4net if you wanted, but at one time if you had a dependency on NHibernate then you had to deploy the log4net library.  I think this went unnoticed by many due to the fact that most developers who used NHibernate also used log4net.</p> <p>When I first learned about the principle, I immediately recognized that it seemed to have limited advertized value for most business applications in light of what Udi Dahan labeled<a href="http://udidahan.com/2009/06/07/the-fallacy-of-reuse/"> The Fallacy Of ReUse</a>.  That is to say, <i>properly understood</i>, the Dependency Inversion Principle has as its primary goal the reuse of components and keeping those components decoupled from dependencies which would keep them from being easily reused with other implementation components, but your application and business logic isn’t something that is likely to ever be reused in a different context.  The take away from that is basically that the advertized value of adhering to the Dependency Inversion Principle is really more applicable to libraries like NHibernate, Automapper, etc. and not so much to that workflow your team built for Acme Inc.’s distribution system.  
Nevertheless, the Dependency Inversion Principle had a practical value of implementing an architecture style Jeffrey Palermo labeled <a href="http://jeffreypalermo.com/blog/the-onion-architecture-part-1/">the Onion Architecture.</a> Specifically, in contrast to <a href="https://msdn.microsoft.com/en-us/library/ff650258.aspx"> traditional 3-layered architecture models</a> where UI, Business, and Data Access layers precluded using something like <a href="https://msdn.microsoft.com/en-us/library/ff648105.aspx?f=255&amp;MSPPError=-2147217396">Data Access Logic Components</a> to encapsulate an ORM to map data directly to entities within the Business Layer, inverting the dependencies between the Business Layer and the Data Access layer provided the ability for the application to interact with the database while also <i>seemingly </i>abstracting away the details of the data access technology used.</p> <p>While I always saw the fallacy in strictly trying to apply the Dependency Inversion Principle to invert the implementation details of how I got my data from my application layer so that I’d someday be able to use the application in a completely different context, it seemed the academically astute and in vogue way of doing Domain-driven Design at the time, seemed consistent with the GoF’s advice to program to an interface rather than an implementation, and provided an easier way to write isolation tests than trying to partially stub out ORM types.</p> <h2 id="the-catalyst">The Catalyst</h2> <p>For the longest time, I resisted using Entity Framework.  I had become fairly proficient at using NHibernate and I just saw it as plain stupid to use a framework that was years behind NHibernate in features and maturity, especially when it had such a steep learning curve.  A combination of things happened, though.  A lot of the NHibernate supporters (like many within the Alt.Net crowd) moved on to other platforms like Ruby and Node; anything with Microsoft’s name on it eventually seems to gain market share whether it’s better or not; and Entity Framework eventually did seem to mostly catch up with NHibernate in features, and surpassed it in some areas. So, eventually I found it impossible to avoid using Entity Framework which led to me trying to apply the same patterns I’d used before with this newer-to-me framework.</p> <p>To be honest, everything mostly worked, especially for the really simple stuff.  Eventually, though, I began to see little ways I had to modify my abstraction to accommodate differences in how Entity Framework did things from how NHibernate did things.  What I discovered was that, while my repositories allowed my application code to be physically decoupled from the ORM, the way I was using the repositories was in small ways semantically coupled to the framework.  
I wish I had kept some sort of record every time I ran into something, as the only real thing I can recall now was the motivation with certain design approaches to expose the SaveChanges method for <a href="https://lostechies.com/derekgreer/2015/11/01/survey-of-entity-framework-unit-of-work-patterns/"> Unit of Work implementations</a>.  I don’t want to make more of the semantic coupling argument against repositories than it’s worth, but observing little places where <a href="https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/">my abstractions were leaking</a>, combined with the pebble in my shoe of developers I felt were far better than me saying I shouldn’t use them, led me to begin rethinking things.</p> <h2 id="more-effective-testing-strategies">More Effective Testing Strategies</h2> <p>It was actually a few years before I stopped using repositories that I stopped stubbing out repositories.  Around 2010, I learned that you can use Test-Driven Development to achieve 100% test coverage for the code for which you’re responsible, but when you plug your code in for the first time with a team that wasn’t designing to the same specification and wasn’t writing any tests at all, things may not work.  It was then that I got turned on to Acceptance Test Driven Development.  What I found was that writing high-level subcutaneous tests (i.e. skipping the UI layer, but otherwise end-to-end) was overall easier, was possible to align with acceptance criteria contained within a user story, provided more assurance that everything worked as a whole, and was easier to get teams on board with.  Later on, I surmised that I really shouldn’t have been writing isolation tests for components which, for the most part, are just specialized facades anyway.  All an isolation test for a facade really says is “did I delegate this operation correctly” and if you’re not careful you can end up just writing a whole bunch of tests that basically just validate whether you correctly configured your mocking library.</p> <p>So, by the time I started rethinking my use of repositories, I had long since stopped using them for test isolation.</p> <h2 id="taking-the-plunge">Taking the Plunge</h2> <p>It was actually about a year after I had become convinced that repositories were unnecessary, useless abstractions that I started working with a new codebase I had the opportunity to steer.  Once I eliminated them from the equation, everything got so much simpler.  Having been repository-free for about two years now, I think I’d have a hard time joining a team that had an affinity for them.</p> <h2 id="conclusion">Conclusion</h2> <p>If you’re still using repositories and you don’t have some other hangup you still need to get over, like writing unit tests for your controllers or application services, then give the repository-free lifestyle a try.  I bet you’ll love it.</p> Using Manual Mocks to test the AWS SDK with Jest https://derikwhittaker.blog/2018/02/20/using-manual-mocks-to-test-the-aws-sdk-with-jest/ Maintainer of Code, pusher of bits… urn:uuid:3a424860-3707-7327-2bb1-a60b9f3be47d Tue, 20 Feb 2018 13:56:45 +0000 Anytime you build Node applications it is highly suggested that you cover your code with tests.
When your code interacts with 3rd party API&#8217;s such as AWS you will most certainly want to mock/stub your calls in order to prevent external calls (if you actually want to do external calls, these are called integration tests &#8230; <p><a href="https://derikwhittaker.blog/2018/02/20/using-manual-mocks-to-test-the-aws-sdk-with-jest/" class="more-link">Continue reading <span class="screen-reader-text">Using Manual Mocks to test the AWS SDK with&#160;Jest</span></a></p> <p>Anytime you build Node applications it is highly suggested that you cover your code with tests.  When your code interacts with 3rd party API&#8217;s such as AWS you will most certainly want to mock/stub your calls in order to prevent external calls (if you actually want to do external calls, these are called integration tests, not unit tests).</p> <p>If you are using <a href="http://bit.ly/jest-get-started" target="_blank" rel="noopener">Jest</a>, one solution is to utilize the built-in support for <a href="http://bit.ly/jest-manual-mocks" target="_blank" rel="noopener">manual mocks.</a>  I have found the usage of manual mocks invaluable while testing 3rd party API&#8217;s such as AWS.  Keep in mind that because I am using manual mocks, this removes the need for libraries like <a href="http://bit.ly/sinon-js" target="_blank" rel="noopener">SinonJs</a> (a JavaScript framework for creating stubs/mocks/spies).</p> <p>The way that manual mocks work in Jest is as follows (from the Jest website&#8217;s documentation).</p> <blockquote><p><em>Manual mocks are defined by writing a module in a <code>__mocks__/</code> subdirectory immediately adjacent to the module. For example, to mock a module called <code>user</code> in the <code>models</code> directory, create a file called <code>user.js</code> and put it in the <code>models/__mocks__</code> directory. Note that the <code>__mocks__</code> folder is case-sensitive, so naming the directory <code>__MOCKS__</code> will break on some systems. If the module you are mocking is a node module (eg: <code>fs</code>), the mock should be placed in the <code>__mocks__</code> directory adjacent to <code>node_modules</code> (unless you configured <a href="https://facebook.github.io/jest/docs/en/configuration.html#roots-array-string"><code>roots</code></a> to point to a folder other than the project root).</em></p></blockquote> <p>In my case I want to mock out the usage of the <a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">AWS-SDK</a> for <a href="http://bit.ly/aws-sdk-node" target="_blank" rel="noopener">Node</a>.</p> <p>To do this I created a __mocks__ folder at the root of my solution.
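<p>For reference, the resulting layout ends up looking roughly like the sketch below (the <code>src</code> folder is just a placeholder for wherever your application code lives; the important part is that <code>__mocks__</code> sits adjacent to <code>node_modules</code>, per the Jest documentation quoted above).</p> <div class="code-snippet"> <pre class="code-content">project-root/
  __mocks__/
    aws-sdk.js   &lt;-- the manual mock Jest loads in place of the npm package
  node_modules/
  src/
  package.json
</pre> </div>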
I then created a <a href="http://bit.ly/gist-aws-sdk-js" target="_blank" rel="noopener">aws-sdk.js</a> file inside this folder.</p> <p>Now that I have my mocks folder created with an aws-sdk.js file I am able to consume my manual mock in my jest test by simply referencing the aws-sdk via a <code>require('aws-sdk')</code> command.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk');
</pre> </div> <p>With the declaration of AWS above my code is able to use the <a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">NPM </a>package during normal usage, or my aws-sdk.js mock when running under the Jest context.</p> <p>Below is a small sample of the code I have inside my aws-sdk.js file for my manual mock.</p> <div class="code-snippet"> <pre class="code-content">const stubs = require('./aws-stubs');

const AWS = {};

// This here is to allow/prevent runtime errors if you are using
// AWS.config to do some runtime configuration of the library.
// If you do not need any runtime configuration you can omit this.
AWS.config = {
  setPromisesDependency: (arg) =&gt; {}
};

AWS.S3 = function() {
}

// Because I care about using the S3 services which are part of the SDK
// I need to setup the correct identifier.
//
AWS.S3.prototype = {
  ...AWS.S3.prototype,

  // Stub for the listObjectsV2 method in the sdk
  listObjectsV2(params){
    const stubPromise = new Promise((resolve, reject) =&gt; {
      // pulling in stub data from an external file to remove the noise
      // from this file.  See the top line for how to pull this in
      resolve(stubs.listObjects);
    });

    return {
      promise: () =&gt; {
        return stubPromise;
      }
    }
  }
};

// Export my AWS function so it can be referenced via requires
module.exports = AWS;
</pre> </div> <p>A few things to point out in the code above.</p> <ol> <li>I chose to use the <a href="http://bit.ly/sdk-javascript-promises" target="_blank" rel="noopener">promise</a>s implementation of the listObjectsV2.  Because of this I need to return a promise method as my result on my listObjectsV2 function.  I am sure there are other ways to accomplish this, but this worked and is pretty easy.</li> <li>My function is returning stub data, but this data is described in a separate file called aws-stubs.js which sits alongside my aws-sdk.js file.  I went this route to remove the noise of having the stub data inside my aws-sdk file.  You can see a full example of this <a href="http://bit.ly/gist-aws-stub-data" target="_blank" rel="noopener">here</a>.</li> </ol> <p>Now that I have everything set up my tests will no longer attempt to hit the actual aws-sdk, but when running in non-test mode they will.</p> <p>Till next time,</p> Configure Visual Studio Code to debug Jest Tests https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/ Maintainer of Code, pusher of bits… urn:uuid:31928626-b984-35f6-bf96-5bfb71e16208 Fri, 16 Feb 2018 21:33:03 +0000 If you have not given Visual Studio Code a spin you really should, especially if you are doing web/javascript/Node development. One super awesome feature of VS Code is the ability to easily configure the ability to debug your Jest (should work just fine with other JavaScript testing frameworks) tests.
I have found that most of &#8230; <p><a href="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/" class="more-link">Continue reading <span class="screen-reader-text">Configure Visual Studio Code to debug Jest&#160;Tests</span></a></p> <p>If you have not given <a href="https://code.visualstudio.com/" target="_blank" rel="noopener">Visual Studio Code</a> a spin you really should, especially if  you are doing web/javascript/Node development.</p> <p>One super awesome feature of VS Code is the ability to easily configure the ability to debug your <a href="https://facebook.github.io/jest/" target="_blank" rel="noopener">Jest </a>(should work just fine with other JavaScript testing frameworks) tests.  I have found that most of the time I do not need to actually step into the debugger when writing tests, but there are times that using <code>console.log</code> is just too much friction and I want to step into the debugger.</p> <p>So how do we configure VS Code?</p> <p>First you  will need to install the <a href="https://www.npmjs.com/package/jest-cli" target="_blank" rel="noopener">Jest-Cli</a> NPM package (I am assuming you already have Jest setup to run your tests, if you do not please read the <a href="https://facebook.github.io/jest/docs/en/getting-started.html" target="_blank" rel="noopener">Getting-Started</a> docs).  If you fail to do this step you will get the following error in Code when you try to run the debugger.</p> <p><img data-attachment-id="78" data-permalink="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/jestcli/" data-orig-file="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640" data-orig-size="702,75" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="JestCLI" data-image-description="" data-medium-file="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640?w=300" data-large-file="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640?w=640" class="alignnone size-full wp-image-78" src="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640" alt="JestCLI" srcset="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640 640w, https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=150 150w, https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=300 300w, https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png 702w" sizes="(max-width: 640px) 100vw, 640px" /></p> <p>After you have Jest-Cli installed you will need to configure VS Code for debugging.  To do this open up the configuration by clicking Debug -&gt; Open Configurations.  
This will open up a file called launch.json.</p> <p>Once launch.json is open add the following configuration</p> <div class="code-snippet"> <pre class="code-content"> { "name": "Jest Tests", "type": "node", "request": "launch", "program": "${workspaceRoot}/node_modules/jest-cli/bin/jest.js", "stopOnEntry": false, "args": ["--runInBand"], "cwd": "${workspaceRoot}", "preLaunchTask": null, "runtimeExecutable": null, "runtimeArgs": [ "--nolazy" ], "env": { "NODE_ENV": "development" }, "console": "internalConsole", "sourceMaps": false, "outFiles": [] } </pre> </div> <p>Here is a gist of a working <a href="https://gist.github.com/derikwhittaker/331d4a5befddf7fc6b2599f1ada5d866" target="_blank" rel="noopener">launch.json</a> file.</p> <p>After you save the file you are almost ready to start your debugging.</p> <p>Before you can debug you will want to open the debug menu (the bug icon on the left toolbar).   This will show a drop down menu with different configurations.  Make sure &#8216;Jest Test&#8217; is selected.</p> <p><img data-attachment-id="79" data-permalink="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/jesttest/" data-orig-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640" data-orig-size="240,65" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="JestTest" data-image-description="" data-medium-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640?w=240" data-large-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640?w=240" class="alignnone size-full wp-image-79" src="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640" alt="JestTest" srcset="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png 240w, https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=150 150w" sizes="(max-width: 240px) 100vw, 240px" /></p> <p>If you have this setup correctly you should be able to set breakpoints and hit F5.</p> <p>Till next time,</p> On Migrating Los Techies to Github Pages https://lostechies.com/derekgreer/2018/02/16/on-migrating-lostechies-to-github-pages/ Los Techies urn:uuid:74de4506-44e0-f605-61cb-8ffe972f6787 Fri, 16 Feb 2018 20:00:00 +0000 We recently migrated Los Techies from a multi-site installation of WordPress to Github Pages, so I thought I’d share some of the more unique portions of the process. For a straightforward guide on migrating from WordPress to Github Pages, Tomomi Imura has published an excellent guide available here that covers exporting content, setting up a new Jekyll site (what Github Pages uses as its static site engine), porting the comments, and DNS configuration. The purpose of this post is really just to cover some of the unique aspects that related to our particular installation. <p>We recently migrated Los Techies from a multi-site installation of WordPress to Github Pages, so I thought I’d share some of the more unique portions of the process. 
For a straightforward guide on migrating from WordPress to Github Pages, Tomomi Imura has published an excellent guide available <a href="https://girliemac.com/blog/2013/12/27/wordpress-to-jekyll/">here</a> that covers exporting content, setting up a new Jekyll site (what Github Pages uses as its static site engine), porting the comments, and DNS configuration. The purpose of this post is really just to cover some of the unique aspects that related to our particular installation.</p> <h2 id="step-1-exporting-content">Step 1: Exporting Content</h2> <p>Having recently migrated <a href="http://aspiringcraftsman.com">my personal blog</a> from WordPress to Github Pages using the aforementioned guide, I thought the process of doing the same for Los Techies would be relatively easy. Unfortunately, due to the fact that we had a woefully out-of-date installation of WordPress, migrating Los Techies proved to be a bit problematic. First, the WordPress to Jekyll Exporter plugin wasn’t compatible with our version of WordPress. Additionally, our installation of WordPress couldn’t be upgraded in place for various reasons. As a result, I ended up taking the rather labor-intensive path of exporting each author’s content using the default WordPress XML export and then, for each author, importing into an up-to-date installation of WordPress using the hosting site with which I previously hosting my personal blog, exporting the posts using the Jekyll Exporter plugin, and then deleting the posts in preparation for the next iteration. This resulted in a collection of zipped, mostly ready posts for each author.</p> <h2 id="step-2-configuring-authors">Step 2: Configuring Authors</h2> <p>Our previous platform utilized the multi-site features of WordPress to facilitate a single site with multiple contributors. By default, Jekyll looks for content within a special folder in the root of the site named _posts, but there are several issues with trying to represent multiple contributors within the _posts folder. Fortunately Jekyll has a feature called Collections which allows you to set up groups of posts which can each have their own associated configuration properties. Once each of the author’s posts were copied to corresponding collection folders, a series of scripts were written to create author-specific index.html, archive.html, and tags.html files which are used by a custom post layout. Additionally, due to the way the WordPress content was exported, the permalinks generated for each post did not reflect the author’s subdirectory, so another script was written to strip out all the generated permalinks.</p> <h2 id="step-3-correcting-liquid-errors">Step 3: Correcting Liquid Errors</h2> <p>Jekyll uses a language called Liquid as its templating engine. Once all the content was in place, all posts which contained double curly braces were interpreted as Liquid commands which ended up breaking the build process. For that, each offending post had to be edited to wrap the content in Liquid directives {% raw %} … {% endraw %} to keep the content from being interpreted by the Liquid parser. Additionally, there were a few other odd things which were causing issues (such as posts with non-breaking space characters) for which more scripts were written to modify the posts to non-offending content.</p> <h2 id="step-4-enabling-disqus">Step 4: Enabling Disqus</h2> <p>The next step was to get Disqus comments working for the posts. 
By default, Disqus will use the page URL as the page identifier, so as long as the paths match then enabling Disqus should just work. The WordPress Disqus plugin we were using utilized a unique post id and guid as the Disqus page identifier, so the Disqus javascript had to be configured to use these properties. These values were preserved by the Jekyll exporter, but unfortunately the generated id property in the Jekyll front matter was getting internally overridden by Jekyll so another script had to be written to modify all the posts to rename the properties used for these values. Properties were added to the Collection configuration in the main _config.yml to designate the Disqus shortname for each author and allow people to toggle whether disqus was enabled or disabled for their posts.</p> <h2 id="step-5-converting-gists">Step 5: Converting Gists</h2> <p>Many authors at Los Techies used a Gist WordPress plugin to embed code samples within their posts. Github Pages supports a jekyll-gist plugin, so another script was written to modify all the posts to use Liquid syntax to denote the gists. This mostly worked, but there were still a number of posts which had to be manually edited to deal with different ways people were denoting their gists. In retrospect, it would have been better to use JavaScript rather than the Jekyll gist plugin due to the size of the Los Techies site. Every plugin you use adds time to the overall build process which can become problematic as we’ll touch on next.</p> <h2 id="step-6-excessive-build-time-mitigation">Step 6: Excessive Build-time Mitigation</h2> <p>The first iteration of the conversion used the Liquid syntax for generating the sidebar content which lists recent site-wide posts, recent author-specific posts, and the list of contributing authors. This resulted in extremely long build times, but it worked and who cares once the site is rendered, right? Well, what I found out was that Github has a hard cut-off of 10 minutes for Jekyll site builds. If your site doesn’t build within 10 minutes, the process gets killed. At first I thought “Oh no! After all this effort, Github just isn’t going to support a site our size!” I then realized that rather than having every page loop over all the content, I could create a Jekyll template to generate JSON content one time and then use JavaScript to retrieve the content and dynamically generate the sidebar DOM elements. This sped up the build significantly, taking the build from close to a half-hour to just a few minutes.</p> <h2 id="step-8-converting-wordpress-uploaded-content">Step 8: Converting WordPress Uploaded Content</h2> <p>Another headache that presented itself is how WordPress represented uploaded content. Everything that anyone had ever uploaded to the site for images and downloads used within their posts was stored in a cryptic folder structure. Each folder had to be interrogated to see which files contained therein matched which author, the folder structure had to be reworked to accommodate the nature of the Jekyll site, and more scripts had to be written to edit everyone’s posts to change paths to the new content. Of course, the scripts only worked for about 95% of the posts; a number of posts had to be edited manually to fix things like non-printable characters being used in file names, etc.</p> <h2 id="step-9-handling-redirects">Step 9: Handling Redirects</h2> <p>The final step to get the initial version of the conversion complete was to handle redirects which were formerly being handled by .htaccess.
The Los Techies site started off using Community Server prior to migrating to WordPress and redirects were set up using .htaccess to maintain the paths to all the previous content locations. Github Pages doesn’t support .htaccess, but it does support a Jekyll redirect plugin. Unfortunately, it requires adding a redirect property to each post requiring a redirect, and we had several thousand, so I had to write another script to read the .htaccess file and figure out which post went with each line. Another unfortunate aspect of using the Jekyll redirect plugin is that it adds overhead to the build time which, as discussed earlier, can become an issue.</p> <h2 id="step-10-enabling-aggregation">Step 10: Enabling Aggregation</h2> <p>Once the conversion was complete, I decided to dedicate some time to figuring out how we might be able to add the ability to aggregate posts from external feeds. The first step to this was finding a service that could aggregate feeds together. You might think there would be a number of things that do this, and while I did find at least a half-dozen services, there were only a couple I found that allowed you to maintain a single feed and add/remove new feeds while preserving the aggregated feed. Most seemed to only allow you to do a one-time aggregation. For this I settled on a site named <a href="http://feed.informer.com">feed.informer.com</a>. Next, I replaced the landing page with JavaScript that dynamically built the site from the aggregated feed, along with replacing the recent author posts section that did the same, and a special external template capable of making an individual post look like it’s actually hosted on Los Techies. The final result was a site that displays a mixture of local content along with aggregated content.</p> <h2 id="conclusion">Conclusion</h2> <p>Overall, the conversion was way more work than I anticipated, but I believe worth the effort. The site is now much faster than it used to be and we aren’t having to pay a hosting service to host our site.</p> Going Async with Node AWS SDK with Express https://derikwhittaker.blog/2018/02/13/going-async-with-node-aws-sdk-with-express/ Maintainer of Code, pusher of bits… urn:uuid:d4750cda-8c6e-8b2f-577b-78c746ee6ebd Tue, 13 Feb 2018 13:00:30 +0000 When building applications in Node/Express you will quickly come to realize that everything is done asynchronously. But how you accomplish these tasks asynchronously can vary.  The 'old school' way was to use callbacks, which often led to callback hell.  Then came along Promises, which we thought were going to solve all the world's problems; it turned out they helped, but did not solve everything.  Finally, in Node 8.0 (ok, you could use them in Node 7.6) support for async/await was introduced and this really has cleaned up and enhanced the readability of your code. <p>When building applications in <a href="https://nodejs.org/en/" target="_blank" rel="noopener">Node</a>/<a href="http://expressjs.com/" target="_blank" rel="noopener">Express </a>you will quickly come to realize that everything is done asynchronously. But how you accomplish these tasks asynchronously can vary.  The &#8216;old school&#8217; way was to use callbacks, which often led to <a href="http://callbackhell.com/" target="_blank" rel="noopener">callback hell</a>.
Then came along <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise">Promises</a>, which we thought were going to solve all the world&#8217;s problems; it turned out they helped, but did not solve everything.  Finally, in Node 8.0 (ok, you could use them in Node 7.6) support for <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function" target="_blank" rel="noopener">async</a>/<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await" target="_blank" rel="noopener">await</a> was introduced and this really has cleaned up and enhanced the readability of your code.</p> <p>Having the ability to use async/await is great, and is supported out of the box w/ Express.  But what do you do when you are using a library which still wants to use promises or callbacks? The case in point for this article is the <a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank" rel="noopener">AWS Node SDK</a>.</p> <p>By default if you read through the AWS SDK documentation the examples lead you to believe that you need to use callbacks when implementing the SDK.  Well this can really lead to some nasty code in the world of Node/Express.  However, as of <a href="https://aws.amazon.com/blogs/developer/support-for-promises-in-the-sdk/" target="_blank" rel="noopener">v2.3.0</a> of the AWS SDK there is support for Promises.  This is much cleaner than using callbacks, but still poses a bit of an issue if you want to use async/await in your Express routes.</p> <p>However, with a bit of work you can get your promise-based AWS calls to play nicely with your async/await-based Express routes.  Let's take a look at how we can accomplish this.</p> <p>Before you get started I am going to make a few assumptions.</p> <ol> <li>You already have a Node/Express application set up</li> <li>You already have the AWS SDK for Node installed; if not, read <a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank" rel="noopener">here</a></li> </ol> <p>The first thing we are going to need to do is add a reference to our AWS SDK and configure it to use promises.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk');

AWS.config.setPromisesDependency(null);
</pre> </div> <p>After we have our SDK configured we can implement our route handler.  In my example here I am placing all the logic inside my handler.  In a real code base I would suggest better deconstruction of this code into smaller parts.</p> <div class="code-snippet"> <pre class="code-content">const express = require('express');
const router = express.Router();

const s3 = new AWS.S3();

router.get('/myRoute', async (req, res) =&gt; {
  const params = { Bucket: "bucket_name_here" };
  let results = {};

  // listObjects returns an AWS.Request; calling .promise() on it gives us a Promise
  var listPromise = s3.listObjects(params).promise();

  listPromise.then((data) =&gt; {
    results = data;
  });

  // wait for the promise to resolve before responding
  await Promise.all([listPromise]);

  res.json({ data: results });
});

module.exports = router;
</pre> </div> <p>Let's review the code above and call out a few important items.</p> <p>The first thing to notice is the addition of the <code>async</code> keyword in my route handler.  This is what allows us to use async/await in Node/Express.</p> <p>The next thing to look at is how I am calling the s3.listObjects.  Notice I am <strong>NOT </strong>providing a callback to the method, but instead I am chaining with .promise().  This is what instructs the SDK to use promises vs callbacks.
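<p>(Side note: because the route handler is already marked <code>async</code>, the promise returned by <code>.promise()</code> could also be awaited directly instead of being handled with the <code>.then</code> chaining shown above and explained next.  A minimal sketch of that variation, reusing the same hypothetical bucket name, with a made-up route path, would look like this.)</p> <div class="code-snippet"> <pre class="code-content">router.get('/myOtherRoute', async (req, res) =&gt; {
  // awaiting the promise returned by .promise() suspends this handler
  // until the listObjects call resolves, with no .then chaining needed
  const data = await s3.listObjects({ Bucket: "bucket_name_here" }).promise();

  res.json({ data });
});
</pre> </div>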
Once I have my promise I chain a &#8216;then&#8217; in order to handle my response.</p> <p>The last thing to pay attention to is the line with <code>await Promise.all([listPromise]);</code>  This is the magic that forces our route handler to not return prior to the resolution of all of our Promises.  Without this your call would exit prior to the listObjects call completing.</p> <p>Finally, we are simply returning our data from the listObjects call via a <code>res.json</code> call.</p> <p>That&#8217;s it, pretty straightforward once you learn that the AWS SDK supports something other than callbacks.</p> <p>Till next time,</p> Unable To Access Mysql With Root and No Password After New Install On Ubuntu https://blog.jasonmeridth.com/posts/unable-to-access-mysql-with-root-and-no-password-after-new-install-on-ubuntu/ Jason Meridth urn:uuid:f81a51eb-8405-7add-bddb-f805b183347e Wed, 31 Jan 2018 00:13:00 +0000 <p>This bit me in the rear end again today. Had to reinstall mysql-server-5.7 for other reasons.</p> <p>You just installed <code class="highlighter-rouge">mysql-server</code> locally for your development environment on a recent version of Ubuntu (I have 17.10 artful installed). You did it with a blank password for the <code class="highlighter-rouge">root</code> user. You type <code class="highlighter-rouge">mysql -u root</code> and you see <code class="highlighter-rouge">Access denied for user 'root'@'localhost'</code>.</p> <p><img src="https://blog.jasonmeridth.com/images/wat.png" alt="wat" /></p> <p>Issue: Because you chose to not have a password for the <code class="highlighter-rouge">root</code> user, the <code class="highlighter-rouge">auth_plugin</code> for MySQL defaulted to <code class="highlighter-rouge">auth_socket</code>. That means if you type <code class="highlighter-rouge">sudo mysql -u root</code> you will get in. If you don’t, then this is NOT the fix for you.</p> <p>Solution: Change the <code class="highlighter-rouge">auth_plugin</code> to <code class="highlighter-rouge">mysql_native_password</code> so that you can use the root user in the database.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo mysql -u root
mysql&gt; USE mysql;
mysql&gt; UPDATE user SET plugin='mysql_native_password' WHERE User='root';
mysql&gt; FLUSH PRIVILEGES;
mysql&gt; exit;
$ sudo systemctl restart mysql
$ sudo systemctl status mysql
</code></pre></div></div> <p><strong>NB</strong> ALWAYS set a password for mysql-server in staging/production.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/unable-to-access-mysql-with-root-and-no-password-after-new-install-on-ubuntu/">Unable To Access Mysql With Root and No Password After New Install On Ubuntu</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on January 30, 2018.</p> New Job https://blog.jasonmeridth.com/posts/new-job/ Jason Meridth urn:uuid:102e69a7-2b63-e750-2fa5-f46372d4d7c1 Mon, 08 Jan 2018 18:13:00 +0000 <p>Well, it is a new year and I’ve started a new job.
I am now a Senior Software Engineer at <a href="https://truelinkfinancial.com">True Link Financial</a>.</p> <p><img src="https://blog.jasonmeridth.com/images/tllogo.png" alt="true link financial logo" /></p> <p>After interviewing with the co-founders Kai and Claire and their team, I knew I wanted to work here.</p> <p><strong>TL;DR</strong>: True Link: We give the elderly and disabled (really, anyone) back their financial freedom where they may not usually have it.</p> <p>Longer Version: Imagine you have an elderly family member who may start showing signs of dementia. You can give them a True Link card and administer their card. You link it to their bank account or another source of funding and you can set limitations on when, where and how the card can be used. The family member feels freedom by not having to continually ask for money but is also protected from scammers and non-friendly people (yep, they exist).</p> <p>The customer service team, the marketing team, the product team, the engineering team and everyone else at True Link are amazing.</p> <p>For any nerd readers, the tech stack is currently Rails, React, AWS, Ansible. We’ll be introducing Docker and Kubernetes soon hopefully, but always ensuring the right tools for the right job.</p> <p>Looking forward to 2018.</p> <p>Cheers.</p> <p><a href="https://blog.jasonmeridth.com/posts/new-job/">New Job</a> was originally published by Jason Meridth at <a href="https://blog.jasonmeridth.com">Jason Meridth</a> on January 08, 2018.</p> Hello, React! – A Beginner’s Setup Tutorial https://lostechies.com/derekgreer/2017/05/25/hello-react-a-beginners-setup-tutorial/ Los Techies urn:uuid:896513a4-c41d-c8ea-820b-fbc3e2b5a442 Thu, 25 May 2017 08:00:32 +0000 React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over configuration for getting started. Tutorials which side-step configuration by using jsfiddle or code generator options are great when you’re wanting to just focus on the framework features themselves, but many tutorials leave beginners struggling to piece things together when you’re ready to create a simple react application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process. <p>React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over configuration for getting started. Tutorials which side-step configuration by using jsfiddle or code generator options are great when you’re wanting to just focus on the framework features themselves, but many tutorials leave beginners struggling to piece things together when you’re ready to create a simple react application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process.</p> <h2 id="a-simple-tutorial">A Simple Tutorial</h2> <p>This tutorial is merely intended to help walk you through the steps to getting a simple React example up and running. When you’re ready to dive into actually learning the React framework, a great list of tutorials can be found <a href="http://andrewhfarmer.com/getting-started-tutorials/">here.</a></p> <p>There are several build, transpiler, or bundling tools from which to select when working with React.
For this tutorial, we’ll be using Node, NPM, Webpack, and Babel.</p> <h2 id="step-1-install-node">Step 1: Install Node</h2> <p>Download and install Node for your target platform. Node distributions can be obtained <a href="https://nodejs.org/en/">here</a>.</p> <h2 id="step-2-create-a-project-folder">Step 2: Create a Project Folder</h2> <p>From a command line prompt, create a folder where you plan to develop your example.</p> <pre>$&gt; mkdir hello-react </pre> <h2 id="step-3-initialize-project">Step 3: Initialize Project</h2> <p>Change directory into the example folder and use the Node Package Manager (npm) to initialize the project:</p> <pre>$&gt; cd hello-react
$&gt; npm init --yes
</pre> <p>This results in the creation of a package.json file. While not technically necessary for this example, creating this file will allow us to persist our packaging and runtime dependencies.</p> <h2 id="step-4-install-react">Step 4: Install React</h2> <p>React is broken up into a core framework package and a package related to rendering to the Document Object Model (DOM).</p> <p>From the hello-react folder, run the following command to install these packages and add them to your package.json file:</p> <pre>$&gt; npm install --save-dev react react-dom </pre> <h2 id="step-5-install-babel">Step 5: Install Babel</h2> <p>Babel is a transpiler, which is to say it’s a tool for converting one language or language version to another. In our case, we’ll be converting EcmaScript 2015 to EcmaScript 5.</p> <p>From the hello-react folder, run the following command to install babel:</p> <pre>$&gt; npm install --save-dev babel-core </pre> <h2 id="step-6-install-webpack">Step 6: Install Webpack</h2> <p>Webpack is a module bundler. We’ll be using it to package all of our scripts into a single script we’ll include in our example Web page.</p> <p>From the hello-react folder, run the following command to install webpack globally:</p> <pre>$&gt; npm install webpack --global </pre> <h2 id="step-7-install-babel-loader">Step 7: Install Babel Loader</h2> <p>Babel loader is a Webpack plugin for using Babel to transpile scripts during the bundling process.</p> <p>From the hello-react folder, run the following command to install babel loader:</p> <pre>$&gt; npm install --save-dev babel-loader </pre> <h2 id="step-8-install-babel-presets">Step 8: Install Babel Presets</h2> <p>Babel presets are collections of plugins needed to support a given feature. For example, the latest version of babel-preset-es2015 at the time of this writing will install 24 plugins which enable Babel to transpile ECMAScript 2015 to ECMAScript 5. We’ll be using presets for ES2015 as well as presets for React.
The React presets are primarily needed for processing of <a href="https://facebook.github.io/react/docs/introducing-jsx.html">JSX</a>.</p> <p>From the hello-react folder, run the following command to install the babel presets for both ES2015 and React:</p> <pre>$&gt; npm install --save-dev babel-preset-es2015 babel-preset-react </pre> <h2 id="step-9-configure-babel">Step 9: Configure Babel</h2> <p>In order to tell Babel which presets we want to use when transpiling our scripts, we need to provide a babel config file.</p> <p>Within the hello-react folder, create a file named .babelrc with the following contents:</p> <pre>{ "presets" : ["es2015", "react"] } </pre> <h2 id="step-10-configure-webpack">Step 10: Configure Webpack</h2> <p>In order to tell Webpack we want to use Babel, where our entry point module is, and where we want the output bundle to be created, we need to create a Webpack config file.</p> <p>Within the hello-react folder, create a file named webpack.config.js with the following contents:</p> <pre>const path = require('path'); module.exports = { entry: './app/index.js', output: { path: path.resolve('dist'), filename: 'index_bundle.js' }, module: { rules: [ { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ } ] } } </pre> <h2 id="step-11-create-a-react-component">Step 11: Create a React Component</h2> <p>For our example, we’ll just be creating a simple component which renders the text “Hello, React!”.</p> <p>First, create an app sub-folder:</p> <pre>$&gt; mkdir app </pre> <p>Next, create a file named app/index.js with the following content:</p> <pre>import React from 'react'; import ReactDOM from 'react-dom'; class HelloWorld extends React.Component { render() { return ( &lt;div&gt; Hello, React! &lt;/div&gt; ) } }; ReactDOM.render(&lt;HelloWorld /&gt;, document.getElementById('root')); </pre> <p>Briefly, this code includes the react and react-dom modules, defines a HelloWorld class which returns an element containing the text “Hello, React!” expressed using <a href="https://facebook.github.io/react/docs/introducing-jsx.html">JSX syntax</a>, and finally renders an instance of the HelloWorld element (also using JSX syntax) to the DOM.</p> <p>If you’re completely new to React, don’t worry too much about trying to fully understand the code. Once you’ve completed this tutorial and have an example up and running, you can move on to one of the aforementioned tutorials, or work through <a href="https://facebook.github.io/react/docs/hello-world.html">React’s Hello World example</a> to learn more about the syntax used in this example.</p> <div class="note"> <p> Note: In many examples, you will see the following syntax: </p> <pre> var HelloWorld = React.createClass( { render() { return ( &lt;div&gt; Hello, React! &lt;/div&gt; ) } }); </pre> <p> This syntax is how classes were defined in older versions of React and will therefore be what you see in older tutorials. As of React version 15.5.0 use of this syntax will produce the following warning: </p> <p style="color: red"> Warning: HelloWorld: React.createClass is deprecated and will be removed in version 16. Use plain JavaScript classes instead. If you&#8217;re not yet ready to migrate, create-react-class is available on npm as a drop-in replacement. 
<h2 id="step-12-create-a-webpage">Step 12: Create a Webpage</h2> <p>Next, we’ll create a simple HTML file which includes the bundled output defined in step 10 and declares a &lt;div&gt; element with the id “root”, which is used by our React source in step 11 to render our HelloWorld component.</p> <p>Within the hello-react folder, create a file named index.html with the following contents:</p> <pre>&lt;html&gt; &lt;div id="root"&gt;&lt;/div&gt; &lt;script src="./dist/index_bundle.js"&gt;&lt;/script&gt; &lt;/html&gt; </pre> <h2 id="step-13-bundle-the-application">Step 13: Bundle the Application</h2> <p>To convert our app/index.js source to ECMAScript 5 and bundle it with the react and react-dom modules we’ve included, we simply need to execute webpack.</p> <p>Within the hello-react folder, run the following command to create the dist/index_bundle.js file referenced by our index.html file:</p> <pre>$&gt; webpack </pre> <h2 id="step-14-run-the-example">Step 14: Run the Example</h2> <p>Using a browser, open up the index.html file. If you’ve followed all the steps correctly, you should see the following text displayed:</p> <pre>Hello, React! </pre> <h2 id="conclusion">Conclusion</h2> <p>Congratulations! After completing this tutorial, you should have a pretty good idea about the steps involved in getting a basic React app up and running. Hopefully this will save some absolute beginners from spending too much time trying to piece these steps together.</p> Up into the Swarm https://lostechies.com/gabrielschenker/2017/04/08/up-into-the-swarm/ Los Techies urn:uuid:844f7b20-25e5-e658-64f4-e4d5f0adf614 Sat, 08 Apr 2017 20:59:26 +0000 Last Thursday evening I had the opportunity to give a presentation at the Docker Meetup in Austin, TX about how to containerize a Node.js application and deploy it into a Docker Swarm. I also demonstrated techniques that can be used to reduce friction in the development process when using containers. <p>Last Thursday evening I had the opportunity to give a presentation at the Docker Meetup in Austin, TX about how to containerize a Node.js application and deploy it into a Docker Swarm. I also demonstrated techniques that can be used to reduce friction in the development process when using containers.</p> <p>The meeting was recorded, but unfortunately sound is only available after approximately 16 minutes. You might want to just scroll forward to that point.</p> <p>Video: <a href="https://youtu.be/g786WiS5O8A">https://youtu.be/g786WiS5O8A</a></p> <p>Slides and code: <a href="https://github.com/gnschenker/pets-node">https://github.com/gnschenker/pets-node</a></p> New Year, New Blog https://lostechies.com/jimmybogard/2017/01/26/new-year-new-blog/ Los Techies urn:uuid:447d30cd-e297-a888-7ccc-08c46f5a1688 Thu, 26 Jan 2017 03:39:05 +0000 One of my resolutions this year was to take ownership of my digital content, and as such, I’ve launched a new blog at jimmybogard.com. I’m keeping all my existing content on Los Techies, which I’ve been humbled to be a part of for the past almost 10 years. Hundreds of posts, thousands of comments, and innumerable wrong opinions on software and systems: it’s been a great ride. <p>One of my resolutions this year was to take ownership of my digital content, and as such, I’ve launched a new blog at <a href="https://jimmybogard.com/">jimmybogard.com</a>.
I’m keeping all my existing content on <a href="https://jimmybogard.lostechies.com/">Los Techies</a>, which I’ve been humbled to be a part of for the past <a href="http://grabbagoft.blogspot.com/2007/11/joining-los-techies.html">almost 10 years</a>. Hundreds of posts, thousands of comments, and innumerable wrong opinions on software and systems: it’s been a great ride.</p> <p>If you’re still subscribed to my FeedBurner feed, nothing needs to change; you’ll get everything as you should. If you’re only subscribed to the Los Techies feed…well, you’ll need to <a href="http://feeds.feedburner.com/GrabBagOfT">subscribe to my feed</a> now.</p> <p>Big thanks to everyone at Los Techies who’s put up with me over the years, especially our site admin <a href="https://jasonmeridth.com/">Jason</a>, who has become far more knowledgeable about WordPress than he probably ever wanted.</p>