Los Techies http://feed.informer.com/digests/ZWDBOR7GBI/feeder Los Techies Respective post owners and feed distributors Thu, 08 Feb 2018 14:40:57 +0000 Feed Informer http://feed.informer.com/ Ensuring componentDidMount is not called in Unit Tests https://derikwhittaker.blog/2018/02/22/ensuring-componentdidmount-is-not-called-in-unit-tests/ Maintainer of Code, pusher of bits… urn:uuid:da94c1a3-2de4-a90c-97f5-d7361397a33c Thu, 22 Feb 2018 19:45:53 +0000 If you are building a ReactJs app you will often times implement componentDidMount on your components.  This is very handy at runtime, but can pose an issue for unit tests. If you are building tests for your React app you are very likely using enzyme to create instances of your component.  The issue is that when enzyme creates &#8230; <p><a href="https://derikwhittaker.blog/2018/02/22/ensuring-componentdidmount-is-not-called-in-unit-tests/" class="more-link">Continue reading <span class="screen-reader-text">Ensuring componentDidMount is not called in Unit&#160;Tests</span></a></p> <p>If you are building a <a href="https://reactjs.org/" target="_blank" rel="noopener">ReactJs</a> app you will often times implement <code>componentDidMount</code> on your components.  This is very handy at runtime, but can pose an issue for unit tests.</p> <p>If you are building tests for your React app you are very likely using <a href="http://airbnb.io/projects/enzyme/" target="_blank" rel="noopener">enzyme</a> to create instances of your component.  The issue is that when enzyme creates the component it invokes the lifecycle methods, like <code>componentDidMount</code>.  Sometimes we do not want this to be called, but how do we suppress it?</p> <p>I have found 2 different ways to suppress/mock <code>componentDidMount</code>.</p> <p>Method one is to redefine <code>componentDidMount</code> on your component for your tests.  This could have interesting side effects, so use with caution.</p> <div class="code-snippet"> <pre class="code-content">
describe('UsefullNameHere', () =&gt; {
  beforeAll(() =&gt; {
    YourComponent.prototype.componentDidMount = () =&gt; {
      // can omit or add custom logic
    };
  });
});
</pre> </div> <p>Basically above I am just redefining the componentDidMount method on my component.  This works and allows you to have custom logic.  Be aware that when doing the above you will have changed the implementation for your component for the lifetime of your test session.</p> <p>Another solution is to use a mocking framework like <a href="http://sinonjs.org/" target="_blank" rel="noopener">SinonJs</a>.  With Sinon you can stub out the <code>componentDidMount</code> implementation as seen below.</p> <div class="code-snippet"> <pre class="code-content">
describe('UsefullNameHere', () =&gt; {
  let componentDidMountStub = null;

  beforeAll(() =&gt; {
    componentDidMountStub = sinon.stub(YourComponent.prototype, 'componentDidMount').callsFake(function() {
      // can omit or add custom logic
    });
  });

  afterAll(() =&gt; {
    componentDidMountStub.restore();
  });
});
</pre> </div> <p>Above I am using .stub to redefine the method.  I also added .<a href="http://sinonjs.org/releases/v4.3.0/stubs/" target="_blank" rel="noopener">callsFake</a>() but this can be omitted if you just want to ignore the call.
You will want to make sure you restore your stub via the afterAll, otherwise you will have stubbed out the call for the lifetime of your test session.</p> <p>Till next time,</p> The False Dichotomy of Monoliths and Microservices https://jimmybogard.com/the-false-dichotomy-of-monoliths-and-microservices/ Jimmy Bogard urn:uuid:d7521657-c9c8-0677-73e8-f39871afe8fe Wed, 21 Feb 2018 20:35:55 +0000 <p>When learning about microservices, you're nearly always introduced to the concept of a monolith. If you're not doing microservices, you're building a <a href="http://microservices.io/patterns/monolithic.html">monolith</a>. If you're not building a monolith, you must go with microservices. If you're building a monolith, perhaps you're doing it well and it's a <a href="https://m.signalvnoise.com/the-majestic-monolith-29166d022228">majestic monolith</a>.</p> <p>From</p> <p>When learning about microservices, you're nearly always introduced to the concept of a monolith. If you're not doing microservices, you're building a <a href="http://microservices.io/patterns/monolithic.html">monolith</a>. If you're not building a monolith, you must go with microservices. If you're building a monolith, perhaps you're doing it well and it's a <a href="https://m.signalvnoise.com/the-majestic-monolith-29166d022228">majestic monolith</a>.</p> <p>From my early encounters with microservices, this dichotomy bothered me. The discussion of a monolith was nearly always followed its purported problems. Difficult to develop. Difficult to deploy. Difficult to scale.</p> <p>Which I always found odd, I've built applications that weren't microservices that had none of these properties, so what was different?</p> <h3 id="redefiningmicroservicesandmonoliths">Redefining Microservices and Monoliths</h3> <p>The problem is that this is presented as a dichotomy. But it is a false dichotomy. It isn't an either/or choice. And we can get away from this false dichotomy by refining our definition of microservice (which, as even <a href="https://en.wikipedia.org/wiki/Microservices">Wikipedia </a> shows, doesn't have consensus about what the term means).</p> <p>From <a href="https://jimmybogard.com/my-microservices-faq/">my microservices FAQ</a>:</p> <blockquote> <p>A microservice is a service with a design focus towards the smallest autonomous boundary.</p> </blockquote> <p>And a monolith:</p> <blockquote> <p>A monolith is software whose design, information model, and interface combine multiple competing and interfering domains into one single application and data model. There is no longer consensus from users and designers of the software on terms, modeling, information design, or interface. The coupling between competing models makes future changes to the system difficult or impossible.</p> </blockquote> <p>We can still build services that <em>don't</em> strive for the smallest autonomous boundary. And we can build monoliths out of distributed components. And our microservices can also be individual monoliths, if they can't meet the defined operational, business, and delivery objectives.</p> <p>It's also why I see the question of "do I start with microservices or monoliths" to only have one answer: <a href="http://www.awakin.org/read/view.php?tid=583">Mu</a>. No one strives for a monolith, but they may not have a good enough understanding of the business to understand where the service boundaries <em>should</em> be. 
You're asking the wrong question; you need to unask the question and start over.</p> Migrating Contoso University Example to Razor Pages https://jimmybogard.com/migrating-contoso-university-example-to-razor-pages/ Jimmy Bogard urn:uuid:2c91fd9c-3bf9-e2c7-7287-b5fcf0584b7c Wed, 21 Feb 2018 15:24:16 +0000 <p>A coworker noticed that it looked like <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/razor-pages/?tabs=visual-studio">Razor Pages</a> were the new "recommended" way of building server-side rendered web applications in ASP.NET Core 2.0. I hadn't paid much attention because at first glance they looked like Web Forms.</p> <p>However, that's not the case. I forked my <a href="https://github.com/jbogard/ContosoUniversityDotNetCore/">Contoso University</a></p> <p>A coworker noticed that it looked like <a href="https://docs.microsoft.com/en-us/aspnet/core/mvc/razor-pages/?tabs=visual-studio">Razor Pages</a> were the new "recommended" way of building server-side rendered web applications in ASP.NET Core 2.0. I hadn't paid much attention because at first glance they looked like Web Forms.</p> <p>However, that's not the case. I forked my <a href="https://github.com/jbogard/ContosoUniversityDotNetCore/">Contoso University example</a> (how I like to build MVC applications) and updated it to use Razor Pages instead. Razor Pages are similar to controller-less actions, and fit very well into the "feature folder" style we use on our projects here at <a href="https://headspring.com">Headspring</a>. You can check out <a href="https://msdn.microsoft.com/en-us/magazine/mt842512">Steve Smith's MSDN Magazine article for more</a>.</p> <p>Back to our example, let's look first at our typical MVC application. We use:</p> <ul> <li>AutoMapper</li> <li>MediatR</li> <li>HtmlTags</li> <li>FluentValidation</li> <li>Feature Folders</li> </ul> <p>And it winds up looking something like:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0027.png" alt=""></p> <p>We're able to move the controllers into the feature folder, but the controllers themselves are rather pointless. They're there basically to satisfy routing.</p> <p>Years ago folks looked at building controller-less actions, but I'm hesitant to adopt divergence from the fundamental building blocks of the framework I'm on. Extend, configure, but not abandon.</p> <p>So I left those controllers there. The files next to the views contain:</p> <ul> <li>View Models</li> <li>MediatR request/responses</li> <li>MediatR handlers</li> <li>Validators</li> </ul> <p>Most of our applications don't actually use inner classes, but adjacent classes (<code>FooQuery.cs</code> instead of a nested <code>Foo.Query</code> in <code>Foo.cs</code>).</p>
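<p>Either way, a single slice file next to its view might contain something along these lines (a rough sketch with hypothetical names, not code taken from the actual repository):</p> <pre><code class="language-c#">public class Index
{
    public class Query : IRequest&lt;Model&gt;
    {
        public int Id { get; set; }
    }

    public class Validator : AbstractValidator&lt;Query&gt;
    {
        public Validator()
        {
            RuleFor(q =&gt; q.Id).GreaterThan(0);
        }
    }

    public class Model
    {
        public string Title { get; set; }
        public int Credits { get; set; }
    }

    public class Handler : IRequestHandler&lt;Query, Model&gt;
    {
        // SchoolContext is a hypothetical EF DbContext for this sketch
        private readonly SchoolContext _db;
        private readonly IMapper _mapper;

        public Handler(SchoolContext db, IMapper mapper)
        {
            _db = db;
            _mapper = mapper;
        }

        public async Task&lt;Model&gt; Handle(Query request, CancellationToken token)
        {
            var course = await _db.Courses.FindAsync(request.Id);

            return _mapper.Map&lt;Model&gt;(course);
        }
    }
}
</code></pre>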
<p>But everything is together.</p> <p>My initial skepticism with Razor Pages came from the examples looking like everything was shoved in the View, combined with a deep distaste for the abomination that is Web Forms.</p> <p>With that, I wanted to understand what our typical architecture looked like with Razor Pages.</p> <h3 id="migratingtorazorpages">Migrating to Razor Pages</h3> <p>Migrating is fairly straightforward: it was basically renaming the <code>Features</code> folder to <code>Pages</code>, and renaming my vertical slice files to have a <code>cshtml.cs</code> extension:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0028.png" alt=""></p> <p>Next I needed to make my vertical slice class inherit from <code>PageModel</code>:</p> <pre><code class="language-c#">public class Create : PageModel
</code></pre> <p>This class will now handle the GET/POST requests instead of my controller. I needed to move my original controller actions (very complicated):</p> <pre><code class="language-c#">public async Task&lt;IActionResult&gt; Edit(Edit.Query query)
{
    var model = await _mediator.Send(query);

    return View(model);
}

[HttpPost]
[ValidateAntiForgeryToken]
public async Task&lt;IActionResult&gt; Edit(Edit.Command command)
{
    await _mediator.Send(command);

    return this.RedirectToActionJson(nameof(Index));
}
</code></pre> <p>Over to the Razor Pages equivalent (<code>On&lt;Method&gt;Async</code>):</p> <pre><code class="language-c#">public class Edit : PageModel
{
    private readonly IMediator _mediator;

    [BindProperty]
    public Command Data { get; set; }

    public Edit(IMediator mediator) =&gt; _mediator = mediator;

    public async Task OnGetAsync(Query query) =&gt; Data = await _mediator.Send(query);

    public async Task&lt;IActionResult&gt; OnPostAsync()
    {
        await _mediator.Send(Data);

        return this.RedirectToPageJson(nameof(Index));
    }
}
</code></pre> <p>The thing I needed to figure out was that model binding and view models are now just properties on my <code>PageModel</code>. I settled on a convention of having a property named <code>Data</code> rather than making up some property name for every page. This also kept with my convention of having only one model used in my views.</p> <p>Links with Razor Pages are a little different, so I had to go through and replace my tags from <code>asp-controller</code> to <code>asp-page</code>. Not terrible, and I could incrementally move one controller at a time.</p> <p>Finally, I moved the AutoMapper configuration inside this class too. With all this in place, my Razor Page includes:</p> <ul> <li>Page request methods</li> <li>MediatR Request/response models (view models)</li> <li>View rendering</li> <li>Validators</li> <li>Mapping configuration</li> <li>MediatR handlers</li> </ul> <p>I'm not sure how I could make things more cohesive at this point. I will note that standard refactoring techniques still apply - if logic gets complicated in my command handlers, this should be pushed to the domain model to handle.</p> <p>The final weird thing was around my views. The model for a "Razor Page" is the <code>PageModel</code> class, not my original <code>ViewModel</code>. This meant all my views broke.
I needed to change all my extensions and tag helpers to include the <code>Data.</code> prefix on my markup, from:</p> <pre><code class="language-cshtml">&lt;form asp-action="Edit"&gt; @Html.ValidationDiv() &lt;input-tag for="Id"/&gt; &lt;div class="form-group"&gt; &lt;label-tag for="Id"/&gt; &lt;div&gt;&lt;display-tag for="Id"/&gt;&lt;/div&gt; &lt;/div&gt; @Html.FormBlock(m =&gt; m.Title) @Html.FormBlock(m =&gt; m.Credits) @Html.FormBlock(m =&gt; m.Department) </code></pre> <p>To: </p> <pre><code class="language-cshtml">&lt;form method="post"&gt; @Html.ValidationDiv() &lt;input-tag for="Data.Id" /&gt; &lt;div class="form-group"&gt; &lt;label-tag for="Data.Id" /&gt; &lt;div&gt;&lt;display-tag for="Data.Id" /&gt;&lt;/div&gt; &lt;/div&gt; @Html.FormBlock(m =&gt; m.Data.Title) @Html.FormBlock(m =&gt; m.Data.Credits) @Html.FormBlock(m =&gt; m.Data.Department) </code></pre> <p>This messed up my rendering, because our intelligent tag helpers use the property navigation to output text. I had to inform our tag helpers to NOT display the "Data" property name (our tag helpers automatically display text for <code>Foo.Bar.Baz</code> or <code>FooBarBaz</code> to "Foo Bar Baz").</p> <p>So for property chains that start with <code>Data.</code>, I remove that text from our labels:</p> <pre><code class="language-c#">Labels .Always .ModifyWith(er =&gt; er.CurrentTag.Text(er.CurrentTag.Text().Replace("Data ", ""))); </code></pre> <p>This could be made more intelligent, only looking for property chains that have an initial "Data" property. But for my sample this is sufficient.</p> <p>I don't test controllers, nor would I test the action methods on my <code>PageModel</code> class. My tests only deal with MediatR requests/responses, so none of my tests needed to change. I could get rid of MediatR in a lot of cases, and make the methods just do the work on my <code>PageModel</code>, but I really like the consistency and simplicity of MediatR's model of "one-model-in, one-model-out".</p> <p>All in all, I like the direction here. Those building with vertical slice architectures will find a very natural fit with Razor Pages. It's still up to you how much you use nested classes, but this truly makes everything related to a request (minus domain model) all part of a single modifiable location, highly cohesive.</p> <p>Nice!</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=j-mwNbTDThA:YaxqKo9HR8Y:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=j-mwNbTDThA:YaxqKo9HR8Y:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=j-mwNbTDThA:YaxqKo9HR8Y:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=j-mwNbTDThA:YaxqKo9HR8Y:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=j-mwNbTDThA:YaxqKo9HR8Y:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/j-mwNbTDThA" height="1" width="1" alt=""/> Los Techies Welcomes Derik Whittaker https://lostechies.com/derekgreer/2018/02/21/los-techies-welcomes-derik-whittaker/ Los Techies urn:uuid:adc9a1c8-48ea-3bea-1aa7-320d51db12a1 Wed, 21 Feb 2018 11:00:00 +0000 Los Techies would like to introduce, and extend a welcome to Derik Whittaker. Derik is a C# MVP, member of the AspInsiders group, community speaker, and Pluralsight author. Derik was previously a contributor at CodeBetter.com. Welcome, Derik! 
<p>Los Techies would like to introduce, and extend a welcome to Derik Whittaker. Derik is a C# MVP, member of the AspInsiders group, community speaker, and Pluralsight author. Derik was previously a contributor at <a href="http://codebetter.com/">CodeBetter.com</a>. Welcome, Derik!</p> Ditch the Repository Pattern Already https://lostechies.com/derekgreer/2018/02/20/ditch-the-repository-pattern-already/ Los Techies urn:uuid:7fab2063-d833-60ce-9e46-e4a413ec8391 Tue, 20 Feb 2018 21:00:00 +0000 One pattern that still seems particularly common among .Net developers is the Repository pattern. I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago. <p>One pattern that still seems particularly common among .Net developers is the <a href="https://martinfowler.com/eaaCatalog/repository.html">Repository pattern.</a> I began using this pattern with NHibernate around 2006 and only abandoned its use a few years ago.</p> <p>I had read several articles over the years advocating abandoning the Repository pattern in favor of other suggested approaches which served as a pebble in my shoe for a few years, but there were a few design principles whose application seemed to keep motivating me to use the pattern.  It wasn’t until a change of tooling and a shift in thinking about how these principles should be applied that I finally felt comfortable ditching the use of repositories, so I thought I’d recount my journey to provide some food for thought for those who still feel compelled to use the pattern.</p> <h2 id="mental-obstacle-1-testing-isolation">Mental Obstacle 1: Testing Isolation</h2> <p>What I remember being the biggest barrier to moving away from the use of repositories was writing tests for components which interacted with the database.  About a year or so before I actually abandoned use of the pattern, I remember trying to stub out a class derived from Entity Framework’s DbContext after reading an anti-repository blog post.  I don’t remember the details now, but I remember it being painful and even exploring use of a 3rd-party library designed to help write tests for components dependent upon Entity Framework.  I gave up after a while, concluding it just wasn’t worth the effort.  It wasn’t as if my previous approach was pain-free, as at that point I was accustomed to stubbing out particularly complex repository method calls, but as with many things we often don’t notice friction to which we’ve become accustomed for one reason or another.  I had assumed that doing all that work to stub out my repositories was what I should be doing.</p> <p>Another principle that I picked up from somewhere (maybe the big <a href="http://xunitpatterns.com/">xUnit Test Patterns</a> book? … I don’t remember) that seemed to keep me bound to my repositories was that <a href="http://aspiringcraftsman.com/2012/04/01/tdd-best-practices-dont-mock-others/">you shouldn’t write tests that depend upon dependencies you don’t own</a>.  
I believed at the time that I should be writing tests for Application Layer services (which later morphed into discrete dispatched command handlers) and the idea of stubbing out either NHIbernate or Entity Framework violated my sensibilities.</p> <h2 id="mental-obstacle-2-the-dependency-inversion-principle-adherence">Mental Obstacle 2: The Dependency Inversion Principle Adherence</h2> <p>The Dependency Inversion Principle seems to be a source of confusion for many which stems in part from the similarity of wording with the practice of <a href="https://lostechies.com/derickbailey/2011/09/22/dependency-injection-is-not-the-same-as-the-dependency-inversion-principle/">Dependency Injection</a> as well as from the fact that the pattern’s formal definition reflects the platform from whence the principle was conceived (i.e. C++).  One might say that the abstract definition of the Dependency Inversion Principle was too dependent upon the details of its origin (ba dum tss).  I’ve written about the principle a few times (perhaps my most succinct being <a href="https://stackoverflow.com/a/1113937/1219618">this Stack Overflow answer</a>), but put simply, the Dependency Inversion Principle has at its primary goal the decoupling of the portions of your application which define <i>policy</i> from the portions which define <i>implementation</i>.  That is to say, this principle seeks to keep the portions of your application which govern what your application does (e.g. workflow, business logic, etc.) from being tightly coupled to the portions of your application which govern the low level details of how it gets done (e.g. persistence to an Sql Server database, use of Redis for caching, etc.).</p> <p>A good example of a violation of this principle, which I recall from my NHibernate days, was that once upon a time NHibernate was tightly coupled to log4net.  This was later corrected, but at one time the NHibernate assembly had a hard dependency on log4net.  You could use a different logging library for your own code if you wanted, and you could use binding redirects to use a different version of log4net if you wanted, but at one time if you had a dependency on NHibernate then you had to deploy the log4net library.  I think this went unnoticed by many due to the fact that most developers who used NHibernate also used log4net.</p> <p>When I first learned about the principle, I immediately recognized that it seemed to have limited advertized value for most business applications in light of what Udi Dahan labeled<a href="http://udidahan.com/2009/06/07/the-fallacy-of-reuse/"> The Fallacy Of ReUse</a>.  That is to say, <i>properly understood</i>, the Dependency Inversion Principle has as its primary goal the reuse of components and keeping those components decoupled from dependencies which would keep them from being easily reused with other implementation components, but your application and business logic isn’t something that is likely to ever be reused in a different context.  The take away from that is basically that the advertized value of adhering to the Dependency Inversion Principle is really more applicable to libraries like NHibernate, Automapper, etc. and not so much to that workflow your team built for Acme Inc.’s distribution system.  
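<p>Put into code, the principle as it was typically applied looks something like the following sketch - the application layer owns and defines the abstraction it needs, and the persistence code implements it (hypothetical types; NHibernate appears only because it figures in the story):</p> <pre><code class="language-c#">// Owned by the application/business layer - the "policy" side
public interface IOrderRepository
{
    Order GetById(int id);
    void Save(Order order);
}

public class ApproveOrderService
{
    private readonly IOrderRepository _orders;

    public ApproveOrderService(IOrderRepository orders) =&gt; _orders = orders;

    public void Approve(int orderId)
    {
        var order = _orders.GetById(orderId);
        order.Approve();
        _orders.Save(order);
    }
}

// Lives in the data access layer, which now depends "inward" on the business layer
public class NHibernateOrderRepository : IOrderRepository
{
    private readonly ISession _session;

    public NHibernateOrderRepository(ISession session) =&gt; _session = session;

    public Order GetById(int id) =&gt; _session.Get&lt;Order&gt;(id);

    public void Save(Order order) =&gt; _session.SaveOrUpdate(order);
}
</code></pre>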
Nevertheless, the Dependency Inversion Principle had a practical value of implementing an architecture style Jeffrey Palermo labeled <a href="http://jeffreypalermo.com/blog/the-onion-architecture-part-1/">the Onion Architecture.</a> Specifically, in contrast to <a href="https://msdn.microsoft.com/en-us/library/ff650258.aspx"> traditional 3-layered architecture models</a> where UI, Business, and Data Access layers precluded using something like <a href="https://msdn.microsoft.com/en-us/library/ff648105.aspx?f=255&amp;MSPPError=-2147217396">Data Access Logic Components</a> to encapsulate an ORM to map data directly to entities within the Business Layer, inverting the dependencies between the Business Layer and the Data Access layer provided the ability for the application to interact with the database while also <i>seemingly </i>abstracting away the details of the data access technology used.</p> <p>While I always saw the fallacy in strictly trying to apply the Dependency Inversion Principle to invert the implementation details of how I got my data from my application layer so that I’d someday be able to use the application in a completely different context, it seemed the academically astute and in vogue way of doing Domain-driven Design at the time, seemed consistent with the GoF’s advice to program to an interface rather than an implementation, and provided an easier way to write isolation tests than trying to partially stub out ORM types.</p> <h2 id="the-catalyst">The Catalyst</h2> <p>For the longest time, I resisted using Entity Framework.  I had become fairly proficient at using NHibernate and I just saw it as plain stupid to use a framework that was years behind NHibernate in features and maturity, especially when it had such a steep learning curve.  A combination of things happened, though.  A lot of the NHibernate supporters (like many within the Alt.Net crowd) moved on to other platforms like Ruby and Node; anything with Microsoft’s name on it eventually seems to gain market share whether it’s better or not; and Entity Framework eventually did seem to mostly catch up with NHibernate in features, and surpassed it in some areas. So, eventually I found it impossible to avoid using Entity Framework which led to me trying to apply the same patterns I’d used before with this newer-to-me framework.</p> <p>To be honest, everything mostly worked, especially for the really simple stuff.  Eventually, though, I began to see little ways I had to modify my abstraction to accommodate differences in how Entity Framework did things from how NHibernate did things.  What I discovered was that, while my repositories allowed my application code to be physically decoupled from the ORM, the way I was using the repositories was in small ways semantically coupled to the framework.  
I wish I had kept some sort of record every time I ran into something, as the only real thing I can recall now were motivations with certain design approaches to expose the SaveChanges method for <a href="https://lostechies.com/derekgreer/2015/11/01/survey-of-entity-framework-unit-of-work-patterns/"> Unit of Work implementations</a> I don’t want to make more of the semantic coupling argument against repositories than it’s worth, but observing little places where <a href="https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-abstractions/">my abstractions were leaking</a>, combined with the pebble in my shoe of developers who I felt were far better than me were saying I shouldn’t use them lead me to begin rethinking things.</p> <h2 id="more-effective-testing-strategies">More Effective Testing Strategies</h2> <p>It was actually a few years before I stopped using repositories that I stopped stubbing out repositories.  Around 2010, I learned that you can use Test-Driven Development to achieve 100% test coverage for the code for which you’re responsible, but when you plug your code in for the first time with that team that wasn’t designing to the same specification and not writing any tests at all that things may not work.  It was then that I got turned on to Acceptance Test Driven Development.  What I found was that writing high-level subcutaneous tests (i.e. skipping the UI layer, but otherwise end-to-end) was overall easier, was possible to align with acceptance criteria contained within a user story, provided more assurance that everything worked as a whole, and was easier to get teams on board with.  Later on, I surmised that I really shouldn’t have been writing isolation tests for components which, for the most part, are just specialized facades anyway.  All an isolation test for a facade really says is “did I delegate this operation correctly” and if you’re not careful you can end up just writing a whole bunch of tests that basically just validate whether you correctly configured your mocking library.</p> <p>So, by the time I started rethinking my use of repositories, I had long since stopped using them for test isolation.</p> <h2 id="taking-the-plunge">Taking the Plunge</h2> <p>It was actually about a year after I had become convinced that repositories were unnecessary, useless abstractions that I started working with a new codebase I had the opportunity to steer.  Once I eliminated them from the equation, everything got so much simpler.   Having been repository-free for about two years now, I think I’d have a hard time joining a team that had an affinity for them.</p> <h2 id="conclusion">Conclusion</h2> <p>If you’re still using repositories and you don’t have some other hangup you still need to get over like writing unit tests for your controllers or application services then give the repository-free lifestyle a try.  I bet you’ll love it.</p> Using Manual Mocks to test the AWS SDK with Jest https://derikwhittaker.blog/2018/02/20/using-manual-mocks-to-test-the-aws-sdk-with-jest/ Maintainer of Code, pusher of bits… urn:uuid:3a424860-3707-7327-2bb1-a60b9f3be47d Tue, 20 Feb 2018 13:56:45 +0000 Anytime you build Node applications it is highly suggested that your cover your code with tests.  
When your code interacts with 3rd party API&#8217;s such as AWS you will most certainly want to mock/stub your calls in order to prevent external calls (if you actually want to do external calls, these are called integration tests &#8230; <p><a href="https://derikwhittaker.blog/2018/02/20/using-manual-mocks-to-test-the-aws-sdk-with-jest/" class="more-link">Continue reading <span class="screen-reader-text">Using Manual Mocks to test the AWS SDK with&#160;Jest</span></a></p> <p>Anytime you build Node applications it is highly suggested that you cover your code with tests.  When your code interacts with 3rd party API&#8217;s such as AWS you will most certainly want to mock/stub your calls in order to prevent external calls (if you actually want to do external calls, these are called integration tests, not unit tests).</p> <p>If you are using <a href="http://bit.ly/jest-get-started" target="_blank" rel="noopener">Jest</a>, one solution is to utilize the built-in support for <a href="http://bit.ly/jest-manual-mocks" target="_blank" rel="noopener">manual mocks</a>.  I have found the usage of manual mocks invaluable while testing 3rd party API&#8217;s such as the AWS SDK.  Keep in mind that just because I am using manual mocks, this does not remove the need for using libraries like <a href="http://bit.ly/sinon-js" target="_blank" rel="noopener">SinonJs</a> (a JavaScript framework for creating stubs/mocks/spies).</p> <p>The way that manual mocks work in Jest is as follows (from the Jest website&#8217;s documentation).</p> <blockquote><p><em>Manual mocks are defined by writing a module in a <code>__mocks__/</code> subdirectory immediately adjacent to the module. For example, to mock a module called <code>user</code> in the <code>models</code> directory, create a file called <code>user.js</code> and put it in the <code>models/__mocks__</code> directory. Note that the <code>__mocks__</code> folder is case-sensitive, so naming the directory <code>__MOCKS__</code> will break on some systems. If the module you are mocking is a node module (eg: <code>fs</code>), the mock should be placed in the <code>__mocks__</code> directory adjacent to <code>node_modules</code> (unless you configured <a href="https://facebook.github.io/jest/docs/en/configuration.html#roots-array-string"><code>roots</code></a> to point to a folder other than the project root).</em></p></blockquote> <p>In my case I want to mock out the usage of the <a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">AWS-SDK</a> for <a href="http://bit.ly/aws-sdk-node" target="_blank" rel="noopener">Node</a>.</p> <p>To do this I created a __mocks__ folder at the root of my solution.
I then created an <a href="http://bit.ly/gist-aws-sdk-js" target="_blank" rel="noopener">aws-sdk.js</a> file inside this folder.</p> <p>Now that I have my mocks folder created with an aws-sdk.js file I am able to consume my manual mock in my Jest test by simply referencing the aws-sdk via a <code>require('aws-sdk')</code> command.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk');
</pre> </div> <p>With the declaration of AWS above my code is able to use the <a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">NPM</a> package during normal usage, or my aws-sdk.js mock when running under the Jest context.</p> <p>Below is a small sample of the code I have inside my aws-sdk.js file for my manual mock.</p> <div class="code-snippet"> <pre class="code-content">const stubs = require('./aws-stubs');

const AWS = {};

// This here is to allow/prevent runtime errors if you are using
// AWS.config to do some runtime configuration of the library.
// If you do not need any runtime configuration you can omit this.
AWS.config = {
  setPromisesDependency: (arg) =&gt; {}
};

AWS.S3 = function() {
}

// Because I care about using the S3 services which are part of the SDK
// I need to setup the correct identifier.
//
AWS.S3.prototype = {
  ...AWS.S3.prototype,

  // Stub for the listObjectsV2 method in the sdk
  listObjectsV2(params){
    const stubPromise = new Promise((resolve, reject) =&gt; {
      // pulling in stub data from an external file to remove the noise
      // from this file. See the top line for how to pull this in
      resolve(stubs.listObjects);
    });

    return {
      promise: () =&gt; {
        return stubPromise;
      }
    }
  }
};

// Export my AWS function so it can be referenced via requires
module.exports = AWS;
</pre> </div> <p>A few things to point out in the code above.</p> <ol> <li>I chose to use the <a href="http://bit.ly/sdk-javascript-promises" target="_blank" rel="noopener">promises</a> implementation of the listObjectsV2.  Because of this I need to return a promise method as the result of my listObjectsV2 function.  I am sure there are other ways to accomplish this, but this worked and is pretty easy.</li> <li>My function is returning stub data, but this data is described in a separate file called aws-stubs.js which sits alongside my aws-sdk.js file.  I went this route to remove the noise of having the stub data inside my aws-sdk.js file.  You can see a full example of this <a href="http://bit.ly/gist-aws-stub-data" target="_blank" rel="noopener">here</a>.</li> </ol> <p>Now that I have everything set up, my tests will no longer attempt to hit the actual aws-sdk, but when running in non-test mode they will.</p> <p>Till next time,</p> Configure Visual Studio Code to debug Jest Tests https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/ Maintainer of Code, pusher of bits&#8230; urn:uuid:31928626-b984-35f6-bf96-5bfb71e16208 Fri, 16 Feb 2018 21:33:03 +0000
I have found that most of &#8230; <p><a href="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/" class="more-link">Continue reading <span class="screen-reader-text">Configure Visual Studio Code to debug Jest&#160;Tests</span></a></p> <p>If you have not given <a href="https://code.visualstudio.com/" target="_blank" rel="noopener">Visual Studio Code</a> a spin you really should, especially if  you are doing web/javascript/Node development.</p> <p>One super awesome feature of VS Code is the ability to easily configure the ability to debug your <a href="https://facebook.github.io/jest/" target="_blank" rel="noopener">Jest </a>(should work just fine with other JavaScript testing frameworks) tests.  I have found that most of the time I do not need to actually step into the debugger when writing tests, but there are times that using <code>console.log</code> is just too much friction and I want to step into the debugger.</p> <p>So how do we configure VS Code?</p> <p>First you  will need to install the <a href="https://www.npmjs.com/package/jest-cli" target="_blank" rel="noopener">Jest-Cli</a> NPM package (I am assuming you already have Jest setup to run your tests, if you do not please read the <a href="https://facebook.github.io/jest/docs/en/getting-started.html" target="_blank" rel="noopener">Getting-Started</a> docs).  If you fail to do this step you will get the following error in Code when you try to run the debugger.</p> <p><img data-attachment-id="78" data-permalink="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/jestcli/" data-orig-file="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640" data-orig-size="702,75" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="JestCLI" data-image-description="" data-medium-file="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640?w=300" data-large-file="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640?w=640" class="alignnone size-full wp-image-78" src="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640" alt="JestCLI" srcset="https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=640 640w, https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=150 150w, https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png?w=300 300w, https://derikwhittaker.files.wordpress.com/2018/02/jestcli.png 702w" sizes="(max-width: 640px) 100vw, 640px" /></p> <p>After you have Jest-Cli installed you will need to configure VS Code for debugging.  To do this open up the configuration by clicking Debug -&gt; Open Configurations.  
This will open up a file called launch.json.</p> <p>Once launch.json is open add the following configuration</p> <div class="code-snippet"> <pre class="code-content"> { "name": "Jest Tests", "type": "node", "request": "launch", "program": "${workspaceRoot}/node_modules/jest-cli/bin/jest.js", "stopOnEntry": false, "args": ["--runInBand"], "cwd": "${workspaceRoot}", "preLaunchTask": null, "runtimeExecutable": null, "runtimeArgs": [ "--nolazy" ], "env": { "NODE_ENV": "development" }, "console": "internalConsole", "sourceMaps": false, "outFiles": [] } </pre> </div> <p>Here is a gist of a working <a href="https://gist.github.com/derikwhittaker/331d4a5befddf7fc6b2599f1ada5d866" target="_blank" rel="noopener">launch.json</a> file.</p> <p>After you save the file you are almost ready to start your debugging.</p> <p>Before you can debug you will want to open the debug menu (the bug icon on the left toolbar).   This will show a drop down menu with different configurations.  Make sure &#8216;Jest Test&#8217; is selected.</p> <p><img data-attachment-id="79" data-permalink="https://derikwhittaker.blog/2018/02/16/configure-visual-studio-code-to-debug-jest-tests/jesttest/" data-orig-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640" data-orig-size="240,65" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="JestTest" data-image-description="" data-medium-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640?w=240" data-large-file="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640?w=240" class="alignnone size-full wp-image-79" src="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=640" alt="JestTest" srcset="https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png 240w, https://derikwhittaker.files.wordpress.com/2018/02/jesttest.png?w=150 150w" sizes="(max-width: 240px) 100vw, 240px" /></p> <p>If you have this setup correctly you should be able to set breakpoints and hit F5.</p> <p>Till next time,</p> On Migrating Los Techies to Github Pages https://lostechies.com/derekgreer/2018/02/16/on-migrating-lostechies-to-github-pages/ Los Techies urn:uuid:74de4506-44e0-f605-61cb-8ffe972f6787 Fri, 16 Feb 2018 20:00:00 +0000 We recently migrated Los Techies from a multi-site installation of WordPress to Github Pages, so I thought I’d share some of the more unique portions of the process. For a straightforward guide on migrating from WordPress to Github Pages, Tomomi Imura has published an excellent guide available here that covers exporting content, setting up a new Jekyll site (what Github Pages uses as its static site engine), porting the comments, and DNS configuration. The purpose of this post is really just to cover some of the unique aspects that related to our particular installation. <p>We recently migrated Los Techies from a multi-site installation of WordPress to Github Pages, so I thought I’d share some of the more unique portions of the process. 
For a straightforward guide on migrating from WordPress to Github Pages, Tomomi Imura has published an excellent guide available <a href="https://girliemac.com/blog/2013/12/27/wordpress-to-jekyll/">here</a> that covers exporting content, setting up a new Jekyll site (what Github Pages uses as its static site engine), porting the comments, and DNS configuration. The purpose of this post is really just to cover some of the unique aspects that related to our particular installation.</p> <h2 id="step-1-exporting-content">Step 1: Exporting Content</h2> <p>Having recently migrated <a href="http://aspiringcraftsman.com">my personal blog</a> from WordPress to Github Pages using the aforementioned guide, I thought the process of doing the same for Los Techies would be relatively easy. Unfortunately, due to the fact that we had a woefully out-of-date installation of WordPress, migrating Los Techies proved to be a bit problematic. First, the WordPress to Jekyll Exporter plugin wasn’t compatible with our version of WordPress. Additionally, our installation of WordPress couldn’t be upgraded in place for various reasons. As a result, I ended up taking the rather labor-intensive path of exporting each author’s content using the default WordPress XML export and then, for each author, importing into an up-to-date installation of WordPress using the hosting site with which I previously hosting my personal blog, exporting the posts using the Jekyll Exporter plugin, and then deleting the posts in preparation for the next iteration. This resulted in a collection of zipped, mostly ready posts for each author.</p> <h2 id="step-2-configuring-authors">Step 2: Configuring Authors</h2> <p>Our previous platform utilized the multi-site features of WordPress to facilitate a single site with multiple contributors. By default, Jekyll looks for content within a special folder in the root of the site named _posts, but there are several issues with trying to represent multiple contributors within the _posts folder. Fortunately Jekyll has a feature called Collections which allows you to set up groups of posts which can each have their own associated configuration properties. Once each of the author’s posts were copied to corresponding collection folders, a series of scripts were written to create author-specific index.html, archive.html, and tags.html files which are used by a custom post layout. Additionally, due to the way the WordPress content was exported, the permalinks generated for each post did not reflect the author’s subdirectory, so another script was written to strip out all the generated permalinks.</p> <h2 id="step-3-correcting-liquid-errors">Step 3: Correcting Liquid Errors</h2> <p>Jekyll uses a language called Liquid as its templating engine. Once all the content was in place, all posts which contained double curly braces were interpreted as Liquid commands which ended up breaking the build process. For that, each offending post had to be edited to wrap the content in Liquid directives {% raw %} … {% endraw %} to keep the content from being interpreted by the Liquid parser. Additionally, there were a few other odd things which were causing issues (such as posts with non-breaking space characters) for which more scripts were written to modify the posts to non-offending content.</p> <h2 id="step-4-enabling-disqus">Step 4: Enabling Disqus</h2> <p>The next step was to get Disqus comments working for the posts. 
By default, Disqus will use the page URL as the page identifier, so as long as the paths match then enabling Disqus should just work. The WordPress Disqus plugin we were using utilized a unique post id and guid as the Disqus page identifier, so the Disqus javascript had to be configured to use these properties. These values were preserved by the Jekyll exporter, but unfortunately the generated id property in the Jekyll front matter was getting internally overridden by Jekyll so another script had to be written to modify all the posts to rename the properties used for these values. Properties were added to the Collection configuration in the main _config.yml to designate the Disqus shortname for each author and allow people to toggle whether disqus was enabled or disabled for their posts.</p> <h2 id="step-5-converting-gists">Step 5: Converting Gists</h2> <p>Many authors at Los Techies used a Gist WordPress plugin to embed code samples within their posts. Github Pages supports a jekyll-gist plugin, so another script was written to modify all the posts to use Liquid syntax to denote the gists. This mostly worked, but there were still a number of posts which had to be manually edited to deal with different ways people were denoting their gists. In retrospect, it would have been better to use JavaScript rather than the Jekyll gist plugin due to the size of the Los Techies site. Every plugin you use adds time to the overall build process which can become problematic as we’ll touch on next.</p> <h2 id="step-6-excessive-build-time-mitigation">Step 6: Excessive Build-time Mitigation</h2> <p>The first iteration of the conversion used the Liquid syntax for generating the sidebar content which lists recent site-wide posts, recent author-specific posts, and the list of contributing authors. This resulted in extremely long build times, but it worked and who cares once the site is rendered, right? Well, what I found out was that Github has a hard cut off of 10 minutes for Jekyll site builds. If your site doesn’t build within 10 minutes, the process gets killed. At first I thought “Oh no! After all this effort, Github just isn’t going to support a site our size!” I then realized that rather than having every page loop over all the content, I could create a Jekyll template to generate JSON content one time and then use JavaScript to retrieve the content and dynamically generate the sidebar DOM elements. This sped up the build significantly, taking the build from close to a half-hour to just a few minutes.</p> <h2 id="step-8-converting-wordpress-uploaded-content">Step 8: Converting WordPress Uploaded Content</h2> <p>Another headache that presented itself is how WordPress represented uploaded content. Everything that anyone had ever uploaded to the site for images and downloads used within their posts were stored in a cryptic folder structure. Each folder had to be interrogated to see which files contained therein matched what author, the folder structure had to be reworked to accommodate the nature of the Jekyll site, and more scripts had to be written to edit everyone’s posts to change paths to the new content. Of course, the scripts only worked for about 95% of the posts, a number of posts had to be edited manually to fix things like non-printable characters being used in file names, etc.</p> <h2 id="step-9-handling-redirects">Step 9: Handling Redirects</h2> <p>The final step to get the initial version of the conversion complete was to handle redirects which were formally being handled by .httpacess. 
The Los Techies site started off using Community Server prior to migrating to WordPress and redirects were set up using .httpaccess to maintain the paths to all the previous content locations. Github Pages doesn’t support .httpaccess, but it does support a Jekyll redirect plugin. Unfortunately, it requires adding a redirect property to each post requiring a redirect and we had several thousand, so I had to write another script to read the .httpaccess file and figure out which post went with each line. Another unfortunate aspect of using the Jekyll redirect plugin is that it adds overhead to the build time which, as discussed earlier, can become an issue.</p> <h2 id="step-10-enabling-aggregation">Step 10: Enabling Aggregation</h2> <p>Once the conversion was complete, I decided to dedicate some time to figuring out how we might be able to add the ability to aggregate posts from external feeds. The first step to this was finding a service that could aggregate feeds together. You might think there would be a number of things that do this, and while I did find at least a half-dozen services, there were only a couple I found that allowed you to maintain a single feed and add/remove new feeds while preserving the aggregated feed. Most seemed to only allow you to do a one-time aggregation. For this I settled on a site named <a href="http://feed.informer.com">feed.informer.com</a>. Next, I replaced the landing page with JavaScript that dynamically built the site from the aggregated feed along with replacing the recent author posts section that did the same and a special external template capable of making an individual post look like it’s actually hosted on Los Techies. The final result was a site that displays a mixture of local content along with aggregated content.</p> <h2 id="conclusion">Conclusion</h2> <p>Overall, the conversion was way more work than I anticipated, but I believe worth the effort. 
The site is now much faster than it used to be and we aren’t having to pay a hosting service to host our site.</p> My Microservices FAQ https://jimmybogard.com/my-microservices-faq/ Jimmy Bogard urn:uuid:08003504-4dfc-f15b-44ad-af46348b05b7 Thu, 15 Feb 2018 19:30:34 +0000 <p>Mainly because I get asked all the time about microservices and I'm tired of having to remember on the spot:</p> <h3 id="whatisamicroservice">What is a microservice?</h3> <p>A microservice is a service with a design focus towards the smallest autonomous boundary.</p> <h3 id="whatisaservice">What is a service?</h3> <p>(From <a href="https://gotocon.com/amsterdam-2016/presentation/Messaging%20and%20Microservices">Clemens</a>) A service is software that:</p> <ul> <li>is</li></ul> <p>Mainly because I get asked all the time about microservices and I'm tired of having to remember on the spot:</p> <h3 id="whatisamicroservice">What is a microservice?</h3> <p>A microservice is a service with a design focus towards the smallest autonomous boundary.</p> <h3 id="whatisaservice">What is a service?</h3> <p>(From <a href="https://gotocon.com/amsterdam-2016/presentation/Messaging%20and%20Microservices">Clemens</a>) A service is software that:</p> <ul> <li>is owned, built, and run by an organization</li> <li>is responsible for holding, processing, and/or distributing particular kinds of information within the scope of a system</li> <li>can be built, deployed, and run independently, meeting defined operational objectives</li> <li>communicates with consumers and other services, presenting information using conventions and/or contract assurances</li> <li>protects itself against unwanted access, and its information against loss</li> <li>handles failure conditions such that failures cannot lead to information corruption</li> </ul> <p>In short, a service is software that exhibits autonomy. It's built, deployed, and run independently from other services. It manages its own information and how it communicates to/from the outside world. It handles failures so that its information isn't lost.</p> <h3 id="domicroservicesrequirecontainersdocker">Do microservices require containers/Docker?</h3> <p>No.</p> <h3 id="domicroservicesrequiregonodejselixirnetcore">Do microservices require Go/node.js/Elixir/.NET Core?</h3> <p>No.</p> <h3 id="domicroservicesrequireanyspecifictechnologyorstack">Do microservices require any specific technology or stack?</h3> <p>No.</p> <h3 id="iputmyappinacontainerisitamicroservice">I put my app in a container, is it a microservice?</h3> <p>Containers have no bearing on whether or not software is a microservice or not.</p> <h3 id="whydopeopletalkaboutmicroservicesalongwithcontainersserverlesspaas">Why do people talk about microservices along with containers/serverless/PaaS?</h3> <p>Those are technologies that can help remove technical barriers for the size of services.</p> <h3 id="whatisamonolith">What is a monolith?</h3> <p>A monolith is software whose design, information model, and interface combine multiple competing and interfering domains into one single monolithic application and data model. There is no longer consensus from users and designers of the software on terms, modeling, information design, or interface. The coupling between competing models makes future changes to the system difficult or impossible.</p> <h3 id="ifimnotdoingmicroservicesamibuildingmonoliths">If I'm not doing microservices, am I building monoliths?</h3> <p>No, not necessarily. 
You can build services without a focus on the smallest but still autonomous boundary.</p> <h3 id="ifihaveasingleapplicationanddatabaseisitamonolith">If I have a single application and database, is it a monolith?</h3> <p>No, not necessarily. You may have a single software system whose model is cohesive enough that there aren't competing/interfering domains. If that application meets the defined operational and business needs of the organization, then it does not have the negative indicators of a monolith.</p> <p>Similarly, not all data models are "anemic domain models". A domain model is only anemic if it's a data model masquerading as a domain model.</p> <h3 id="canamicroservicebetoosmall">Can a microservice be too small?</h3> <p>Yes, if it no longer meets the characteristics of a service, the software is no longer a service, but a module, function, or data store.</p> <h3 id="canamicroservicebetoobig">Can a microservice be too big?</h3> <p>A microservice is a service with a design focus on the smallest <em>possible</em> autonomous boundary. What is possible and desirable is highly contextual to the domain, people, technology, and goals of the business.</p> <h3 id="shouldistartwithmicroservicesoramonolith">Should I start with microservices or a monolith?</h3> <p>Microservices focus on the smallest autonomous boundary for a service. If the domain is ambiguous enough or business goals volatile enough, you may wind up building modules that can be independently <em>deployed</em> but not <em>run</em> because they depend on other modules to function.</p> <p>Perhaps a better question is "should I start with a single application or modular architecture?" The answer is highly dependent on the team, business, and domain maturity.</p> <h3 id="howshouldmicroservicescommunicate">How should microservices communicate?</h3> <p>In a manner and medium that does not violate the fundamental definition of a service. If a communication protocol introduces coupling that violates the design traits of a service, then you must re-evaluate that protocol, or re-evaluate your service boundaries.</p> <p>As an example, services that solely communicate via RPC introduce process (and likely temporal) coupling such that the services are no longer autonomous and independent services, but modules within a larger service boundary.</p> <h3 id="shouldidohaveasinglerepositorymonorepoorrepositoryperservice">Should I have a single repository (monorepo) or repository per service?</h3> <p>This depends on your delivery pipeline. If you can own, build, deploy, and run your service in a single source control repository independent of others, this may be an option. You may decide to organize repositories around systems, applications, organizations, services, modules, etc. Repository boundaries are (somewhat) orthogonal to service boundaries. But it may make your life easier to just have repositories per service.</p> <p>If you feel forced to build a single repository to host your services to make changes easier, then you don't have microservices, but modules. Your services aren't independent or loosely coupled to other services, and therefore aren't services.</p> <h3 id="shouldidorestfulservicesorasyncdurablemessaging">Should I do RESTful services or async, durable messaging?</h3> <p>These choices are orthogonal to microservices.
Choose what makes sense for the communication protocol you are designing and the constraints you have, keeping in mind the fundamental definition of a service.</p> <p>I typically chose event-driven architectures as inter-service communication, but this can be achieved via a variety of transport protocols, and is never a universal decision. Avoid making universal decisions such as "everything must use RESTful web services", especially because most "RESTful APIs" are merely RPC-over-HTTP.</p> <h3 id="shouldiusestreamsorqueuesbrokerstodispatcheventstoothers">Should I use streams or queues/brokers to dispatch events to others?</h3> <p>You can do both, as both have quite different designs and constraints, benefits and drawbacks. Fundamentally, though, just as your service should avoid strongly coupling yourself to others, you should avoid directly exposing internal state changes as external events. Contracts exposed to outside your service boundary will change at a different pace and reason than internal state/communication.</p> <p>For example, exposing your event-sourced aggregate's events directly to outside subscribers is not much different than giving out a database connection string to a read-only view of your database.</p> <h3 id="shouldidomicroservices">Should I do microservices?</h3> <p>It depends, building services as small as their autonomy allows is a different way of building services, that usually results in more of them. It affects more than just a developer, to the entire value delivery chain from concept to production. It usually involves imbuing a DevOps culture and mentality, but the ramifications stretch upstream as well.</p> <p>The bigger question to ask is - is our software delivery value stream delivering its value at the speed the business needs? And if not, is the cause of our bottleneck in delivery that we don't have small enough services to align agility more closely? If so, then microservices are a good choice to consider.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Fx6QT-_yhyQ:1otuY19ic4s:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Fx6QT-_yhyQ:1otuY19ic4s:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=Fx6QT-_yhyQ:1otuY19ic4s:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=Fx6QT-_yhyQ:1otuY19ic4s:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=Fx6QT-_yhyQ:1otuY19ic4s:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/Fx6QT-_yhyQ" height="1" width="1" alt=""/> Containers - What Are They Good For? Local Dependencies https://jimmybogard.com/containers-what-are-they-good-for-local-dependencies/ Jimmy Bogard urn:uuid:8d3c1b0a-32a8-23c0-e706-c2117b10d923 Wed, 14 Feb 2018 21:15:05 +0000 <blockquote> <p>Containers, huh, good god <br> What is it good for? <br> Local Dependencies! <br> - Edwin Starr (also disputed)</p> </blockquote> <p>In the <a href="https://jimmybogard.com/containers-what-is-it-good-for/">last post</a>, I walked through our typical development pipeline, from local dev to production:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0019.png" alt="Dev Workflow"></p> <p>Now for most of our developers, when we start a new project, we can just continue to work</p> <blockquote> <p>Containers, huh, good god <br> What is it good for? 
<br> Local Dependencies! <br> - Edwin Starr (also disputed)</p> </blockquote> <p>In the <a href="https://jimmybogard.com/containers-what-is-it-good-for/">last post</a>, I walked through our typical development pipeline, from local dev to production:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0019.png" alt="Dev Workflow"></p> <p>Now for most of our developers, when we start a new project, we can just continue to work on our existing host machine. The development dependencies don't change <em>that</em> much from project to project, and we're on projects typically for 6-12 months. We're not typically switching from project to project, nor do we work on multiple projects as the norm.</p> <p>So our "normal" dev dependencies are:</p> <ul> <li>Some recent version Visual Studio (hopefully the latest)</li> <li>Some recent version and any SKU of SQL Server (typically SQL Express)</li> </ul> <p>The most we run into are connection strings being different because of different instance names. However, this isn't universal, and we sometimes have wonky VPN requirements and such, so a developer might do something like:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0020.png" alt="Virtual Machine"></p> <p>The entire development environment is in a virtual machine, so that its dependencies don't get mucked with the host, and vice-versa. The downside to this approach is maintaining these images is a bit of a pain - Visual Studio is large, and so is SQL Server, so it's not uncommon that these VMs get into the tens, even over 100 GB in size.</p> <p>Then we have more exotic scenarios, such as dependencies that don't run too great on Windows, or are a pain to set up, or can't play nice with other versions. For this, we can look at Docker as a means to isolate our dependencies:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0021.png" alt="ELK stack"></p> <p>In this case, we've got the <a href="https://www.elastic.co/elk-stack">Elastic Stack</a> we need to run for local development, but instead of installing it, we run it locally in Docker.</p> <h3 id="dockercomposeforlocaldependencies">Docker compose for local dependencies</h3> <p>When we run our application like this, we're still running the main app on our host machine, but it connects to a set of Docker images for the dependencies. <a href="https://docs.docker.com/compose/">Docker-compose</a> is a great tool for multi-container setups. If we wanted to just run one image, it's too bad with regular Docker, but docker-compose makes it dirt simple to do so.</p> <p>For example, in my Respawn project, I support a number of different databases, and I don't really want to install them all. Instead, I have a simple <code>docker-compose.yml</code> file:</p> <pre><code class="language-yaml">version: '3' services: postgres-db: image: postgres restart: always environment: POSTGRES_USER: docker POSTGRES_PASSWORD: Password12! ports: - 8081:5432 mysql-db: image: mysql restart: always environment: MYSQL_ROOT_PASSWORD: testytest ports: - 8082:3306 oracle: image: sath89/oracle-12c restart: always ports: - 8080:8080 - 1521:1521 adminer: image: adminer restart: always ports: - 8083:8083 </code></pre> <p>I pull in Postgres, MySQL, and Oracle (as well as a simple Admin UI), and expose these services externally via some well-known ports. 
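<p>For reference, the tests then just treat these as ordinary local servers on the mapped ports. Here is a minimal sketch of the connection settings (the values mirror the compose file above, Oracle is omitted, and the class and constant names are mine, not from the project):</p> <pre><code class="language-c#">// Connection strings a test might use against the containers above.
// Host ports (8081, 8082) and credentials come from docker-compose.yml.
public static class TestConnections
{
    public const string Postgres =
        "Host=localhost;Port=8081;Username=docker;Password=Password12!";

    public const string MySql =
        "Server=localhost;Port=8082;User ID=root;Password=testytest";
}
</code></pre>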
When I run my build, I start these services:</p> <pre><code>docker-compose up -d </code></pre> <p>Now when my build runs, the tests connect to these services through the configured ports.</p> <p>So this is a big win for us - we don't have to install any 3rd-party services on our dev machines. Instead, we pull down (or build our own) Docker images and run those.</p> <h3 id="whataboutnormalscenarios">What about normal scenarios?</h3> <p>The above works great for "exotic" dependencies outside our normal toolbelt, like databases, queues, and services. But most systems we build are just on top of SQL Server, which most everyone has installed already. If not SQL Express, then SQL Local DB.</p> <p>But I'm in an experimenting mood, what if we did this?</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0022.png" alt=""></p> <p>This is possible today - Microsoft have their full-blown SQL Server images on Docker Hub, both in the form of <a href="https://hub.docker.com/r/microsoft/mssql-server-windows-developer/">SQL Server Developer Edition</a>, and <a href="https://hub.docker.com/r/microsoft/mssql-server-windows-express/">Express</a>. There is a Linux image, but we don't run SQL Server on Linux in production so I'm not sure if there would be a difference locally or not.</p> <p>So this <em>does</em> work, we can ignore whatever SQL a user might have installed and just say "run the database through Docker".</p> <p>There are some downsides to this approach, however:</p> <ul> <li>It's running in Docker, not a Windows Service. You have to always run Docker to run your database</li> <li>The images are huuuuuuuuge</li> </ul> <p>Huge images are just a thing for Windows-based containers (unless you're running Nanoserver). Take a look at the image sizes for SQL Express:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0023.png" alt="Yuuuuuuuge"></p> <p><a href="https://hub.docker.com/r/microsoft/mssql-server-linux/">SQL Server on Linux</a> containers are smaller, but still large, around 450MB. Compare this to Postgres on Alpine Linux:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0024.png" alt="Small Postgres"></p> <p>So the smallest SQL Linux image is over 30x bigger than the smallest Postgres image, and the smallest <em>Windows</em> SQL image is over 350x bigger. That's....ridiculous.</p> <p>What does this actually mean for us locally? Docker images and layers are cached, so once you've got the base images that these SQL images are based on downloaded once, subsequent Docker pulls will only pull the additional images. For us, this would be Windows Server Core images (not Nanoserver), so you might already have this downloaded. Or not.</p> <p>Startup time won't really be affected, it's about the same to start a Docker SQL container as it is to start say the SQL Server local service on the host machine, but that's still seconds.</p> <p>Because of this, for our normal development experience, it's not really any better nor solving a pain that we have.</p> <h3 id="takingitupto11">Taking it up to 11</h3> <p>I talked to quite a few developers who do a lot of local development in Docker, and it seemed to come down to that they are Node developers, and one thing Node is a bit lousy about is global dependencies. 
If you run <code>npm install -g</code>, then depending on what's already in your cache, or what version of <code>node</code> or <code>npm</code> running, you'll get different results.</p> <p>And I also have talked to folks that have run their <em>entire</em> development environment inside of containers, so it looks something like:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0025.png" alt=""></p> <p>The host machine might just include some caches, but we could run <em>everything</em> inside containers. Development environment, editors, our source, everything. This seemed like a great way to avoid Node/Ruby pollution on a host machine, but this person also did things like:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0026.png" alt=""></p> <p>For a normal .NET developer, I'm likely not using vim all day, and I want to do things like use a debugger. It's unlikely then that I'd move all of my development environment into a container.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=7kdJVZTSa2A:JqfFt_g_tTE:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=7kdJVZTSa2A:JqfFt_g_tTE:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=7kdJVZTSa2A:JqfFt_g_tTE:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=7kdJVZTSa2A:JqfFt_g_tTE:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=7kdJVZTSa2A:JqfFt_g_tTE:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/7kdJVZTSa2A" height="1" width="1" alt=""/> Going Async with Node AWS SDK with Express https://derikwhittaker.blog/2018/02/13/going-async-with-node-aws-sdk-with-express/ Maintainer of Code, pusher of bits… urn:uuid:d4750cda-8c6e-8b2f-577b-78c746ee6ebd Tue, 13 Feb 2018 13:00:30 +0000 When building applications in Node/Express you will quickly come to realize that everything is done asynchronously . But how you accomplish these tasks async can vary.  The 'old school' way was to use call backs, which often led to callback hell.  Than came along Promises which we thought was going to solve all the worlds problems, turned out they helped, but did not solve everything.  Finally in Node 8.0 (ok, you could use them in Node 7.6) the support for async/await was introduced and this really has cleaned up and enhanced the readability of your code. <p>When building applications in <a href="https://nodejs.org/en/" target="_blank" rel="noopener">Node</a>/<a href="http://expressjs.com/" target="_blank" rel="noopener">Express </a>you will quickly come to realize that everything is done asynchronously . But how you accomplish these tasks async can vary.  The &#8216;old school&#8217; way was to use call backs, which often led to <a href="http://callbackhell.com/" target="_blank" rel="noopener">callback hell</a>.  Than came along <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise">Promises</a> which we thought was going to solve all the worlds problems, turned out they helped, but did not solve everything.  
Finally in Node 8.0 (ok, you could use them in Node 7.6) the support for <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function" target="_blank" rel="noopener">async</a>/<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await" target="_blank" rel="noopener">await</a> was introduced and this really has cleaned up and enhanced the readability of your code.</p> <p>Having the ability to use async/await is great, and is supported out of the box w/ Express.  But what do you do when you are using a library which still wants to use promises or callbacks? The case in point for this article is the <a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank" rel="noopener">AWS Node SDK</a>.</p> <p>By default if you read through the AWS SDK documentation the examples lead you to believe that you need to use callbacks when implementing the SDK.  Well this can really lead to some nasty code in the world of Node/Express.  However, as of <a href="https://aws.amazon.com/blogs/developer/support-for-promises-in-the-sdk/" target="_blank" rel="noopener">v2.3.0</a> of the AWS SDK there is support for Promises.  This is much cleaner than using callbacks, but still poses a bit of an issue if you want to use async/await in your Express routes.</p> <p>However, with a bit of work you can get your promise based AWS calls to play nicely with your async/await based Express routes.  Let&#8217;s take a look at how we can accomplish this.</p> <p>Before you get started I am going to make a few assumptions.</p> <ol> <li>You already have a Node/Express application setup</li> <li>You already have the AWS SDK for Node installed, if not read <a href="https://aws.amazon.com/sdk-for-node-js/" target="_blank" rel="noopener">here</a></li> </ol> <p>The first thing we are going to need to do is add a reference to our AWS SDK and configure it to use promises.</p> <div class="code-snippet"> <pre class="code-content">const AWS = require('aws-sdk');

// passing null tells the SDK to use the native Promise implementation
AWS.config.setPromisesDependency(null);
</pre> </div> <p>After we have our SDK configured we can implement our route handler.  In my example here I am placing all the logic inside my handler.  In a real code base I would suggest better decomposition of this code into smaller parts.</p> <div class="code-snippet"> <pre class="code-content">const express = require('express');
const AWS = require('aws-sdk');

const router = express.Router();
const s3 = new AWS.S3();

router.get('/myRoute', async (req, res) =&gt; {
    const params = { Bucket: "bucket_name_here" };
    let results = {};

    // .promise() makes the SDK return a promise instead of expecting a callback
    const listPromise = s3.listObjects(params).promise();
    listPromise.then((data) =&gt; { results = data; });

    // wait for the listObjects call to resolve before responding
    await Promise.all([listPromise]);

    res.json({ data: results });
});

module.exports = router;
</pre> </div> <p>Let&#8217;s review the code above and call out a few important items.</p> <p>The first thing to notice is the addition of the <code>async</code> keyword in my route handler.  This is what allows us to use async/await in Node/Express.</p> <p>The next thing to look at is how I am calling the s3.listObjects.  Notice I am <strong>NOT </strong>providing a callback to the method, but instead I am chaining with .promise().  This is what instructs the SDK to use promises vs callbacks.  
Once I have my promise I chain a &#8216;then&#8217; in order to handle my response.</p> <p>The last thing to pay attention to is the line with <code>await Promise.all([listPromise]);</code> This is the magic that forces our route handler to not return prior to the resolution of all of our Promises.  Without this your call would exit prior to the listObjects call completing.</p> <p>Finally, we are simply returning our data from the listObjects call via the <code>res.json</code> call.</p> <p>That&#8217;s it, pretty straightforward, once you learn that the AWS SDK supports something other than callbacks.</p> <p>Till next time,</p> Package and Publish a Node site to AWS ElasticBeanstalk with Gulp https://derikwhittaker.blog/2018/02/11/package-and-publish-a-node-site-to-aws-elasticbeanstalk-with-gulp/ Maintainer of Code, pusher of bits… urn:uuid:17e29206-a481-7747-bbc4-7b631499a6bb Sun, 11 Feb 2018 13:33:08 +0000 Say you are building a Node application which you want to host in AWS Elastic Beanstalk, but how do you automate this process?  I mean sure, you could just open up the AWS console and manually upload your files, but let&#8217;s be honest that is a royal pain in the ass.  What we really want &#8230; <p><a href="https://derikwhittaker.blog/2018/02/11/package-and-publish-a-node-site-to-aws-elasticbeanstalk-with-gulp/" class="more-link">Continue reading <span class="screen-reader-text">Package and Publish a Node site to AWS ElasticBeanstalk with&#160;Gulp</span></a></p> <p>Say you are building a <a href="http://bit.ly/nodejs-org" target="_blank" rel="noopener">Node </a>application which you want to host in AWS Elastic Beanstalk, but how do you automate this process?  I mean sure, you could just open up the AWS console and manually upload your files, but let&#8217;s be honest that is a royal pain in the <del>ass. </del> What we really want to do is set up some sort of automation to make everything easier.  In this post we are going to walk through how to use <a href="http://bit.ly/gulp-getting-started" target="_blank" rel="noopener">Gulp </a>to publish your code to AWS S3 and then consume the package in <a href="http://bit.ly/EB-GettingStarted" target="_blank" rel="noopener">Elastic Beanstalk</a> (EB).</p> <p>In this post we are going to learn how to use Gulp to create, upload and deploy a Node application.  
The flow will be as below</p> <p><img data-attachment-id="68" data-permalink="https://derikwhittaker.blog/2018/02/11/package-and-publish-a-node-site-to-aws-elasticbeanstalk-with-gulp/nodetoawseb/" data-orig-file="https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=640" data-orig-size="2284,514" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="NodeToAWSEB" data-image-description="" data-medium-file="https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=640?w=300" data-large-file="https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=640?w=640" class="alignnone size-full wp-image-68" src="https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=640" alt="NodeToAWSEB" srcset="https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=640 640w, https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=1280 1280w, https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=150 150w, https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=300 300w, https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=768 768w, https://derikwhittaker.files.wordpress.com/2018/02/nodetoawseb.png?w=1024 1024w" sizes="(max-width: 640px) 100vw, 640px" /></p> <p>Before you get started I am going to make a few assumptions.</p> <ol> <li>You already know basic Gulp.js usage, if not details on this <a href="http://bit.ly/gulp-getting-started" target="_blank" rel="noopener">here</a></li> <li>You already have a React application, if not details on this <a href="http://bit.ly/react-tutorial-intro" target="_blank" rel="noopener">here</a></li> <li>You already have an AWS account setup, if not details on this <a href="http://bit.ly/AWS-GettingStarted" target="_blank" rel="noopener">here</a> <ul> <li>You will also need to make sure you have your CLI configured w/ credentials.  
See <a href="http://bit.ly/AWS-Configure-CLI" target="_blank" rel="noopener">here </a>for details</li> </ul> </li> <li>You already have a build npm task in your package.json file, if not look <a href="http://bit.ly/gist-react-start" target="_blank" rel="noopener">here</a></li> <li>You already have the EB CLI installed, if not look <a href="http://bit.ly/AWS-EB-CLI" target="_blank" rel="noopener">here</a></li> </ol> <p>To get started, the first thing we are going to need to install a few NPM packages</p> <ul> <li><a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">aws-sdk</a></li> <li><a href="http://bit.ly/npm-gulp-run" target="_blank" rel="noopener">gulp-run</a></li> <li><a href="http://bit.ly/NPM-Gulp-Zip" target="_blank" rel="noopener">gulp-zip</a></li> <li><a href="http://bit.ly/npm-run-sequence" target="_blank" rel="noopener">run-sequence</a></li> </ul> <div class="code-snippet"> <pre class="code-content">npm install --save-dev aws-sdk gulp-run gulp-zip run-sequence </pre> </div> <p>After you have the npm packages installed we will need to start working on your gulpfile.js changes.</p> <p>Add the reference to the package we added</p> <div class="code-snippet"> <pre class="code-content">const run = require('gulp-run'); const zip = require('gulp-zip'); const AWS = require('aws-sdk'); const runSequence = require('run-sequence'); </pre> </div> <p>After we reference our packages we need to create an instance of the S3 object, this will be used to push the files into our S3 bucket.</p> <div class="code-snippet"> <pre class="code-content">const s3 = new AWS.S3({apiVersion: '2006-03-01'}); </pre> </div> <p>Now that we have all the prerequisite stuff out of the way, time to get down to business.</p> <p>We are going to need to create 5 different tasks in our gulp file.</p> <ol> <li>deploy -&gt; This will be used to kick off the entire process</li> <li>zip-build -&gt; We want to zip our entire solution into a single file for pushing to S3 -&gt; EB</li> <li>copy-to-bucket -&gt; Will copy our zip file to S3 for store-and-forward to EB</li> <li>publish-version-to-eb -&gt; Will create the actual version in EB for usage</li> <li>publish-environment-to-eb -&gt; Will publish the provisioned version we created in the prior step</li> </ol> <div class="code-snippet"> <pre class="code-content">gulp.task('deploy', deployTest); gulp.task('copy-to-bucket', copyToS3Bucket); gulp.task('publish-version-to-eb', publishVersionToEB); gulp.task('publish-environment-to-eb', publishEnvironmentToEB); gulp.task('zip-build', zipBuild) function copyToS3Bucket() { const options = s3Options.active; const packageNameAndLocation = "./deploy/deploymentPackageName.zip"; const s3BucketPath = "s3://your-s3-bucket-name"; return run("aws s3 cp " + packageNameAndLocation + " " + s3BucketPath ).exec(); } function deploy(done) { runSequence('zip-build', 'copy-to-bucket', 'publish-version-to-eb', 'publish-environment-to-eb', done); } function publishVersionToEB(){ const createAppCommand = "aws elasticbeanstalk create-application-version --application-name application-name-from-eb --version-label deployment-version-label --source-bundle S3Bucket=your-bucket-name,S3Key=full-name-of-deploy-zip.zip"; return run(createAppCommand).exec(); } function publishEnvironmentToEB(){ const updateEnvironmentCommand = "aws elasticbeanstalk update-environment --environment-name your-environemnt-name-in-eb --version-label your-version-name-just-created"; return run(updateEnvironmentCommand).exec(); } function zipBuild(){ const options = 
s3Options.active; // when zipping the files we want to grab all files and sub folders // we want to omit our deploy and test files (no need for those in EB) return gulp.src(["./**/*.*", "!./node_modules/**/*.*", "!./deploy/**/*.*", "!./test/**/*.*" ]) .pipe(zip(options.deployPackageName)) .pipe(gulp.dest("deploy")); } </pre> </div> <p>After you have implemented the changes above you can run gulp on the command line and watch the magic unfold.</p> <div class="code-snippet"> <pre class="code-content">gulp deploy </pre> </div> <p>If you would like to see a full working copy of my gulpfile.js checkout this <a href="https://gist.github.com/derikwhittaker/a723b61ed0d716913239c78c52d8c898" target="_blank" rel="noopener">gist</a>.</p> Package and Publish React Site to AWS S3 Bucket with Gulp https://derikwhittaker.blog/2018/02/10/package-and-publish-react-site-to-aws-s3-bucket-with-gulp/ Maintainer of Code, pusher of bits… urn:uuid:e1deed6f-a742-2948-9abc-044751bba6b5 Sat, 10 Feb 2018 20:18:22 +0000 There are many ways you can get code into an S3 bucket, especially if you have a build/deploy server.   But what do you do when you do not have one?  One possible solution is to use Gulp to build/deploy the React application and publish to an S3 bucket. Before you get started I am going &#8230; <p><a href="https://derikwhittaker.blog/2018/02/10/package-and-publish-react-site-to-aws-s3-bucket-with-gulp/" class="more-link">Continue reading <span class="screen-reader-text">Package and Publish React Site to AWS S3 Bucket with&#160;Gulp</span></a></p> <p>There are many ways you can get code into an S3 bucket, especially if you have a build/deploy server.   But what do you do when you do not have one?  One possible solution is to use Gulp to build/deploy the React application and publish to an S3 bucket.</p> <p>Before you get started I am going to make a few assumptions.</p> <ol> <li>You already know basic Gulp.js usage, if not details on this <a href="http://bit.ly/gulp-getting-started" target="_blank" rel="noopener">here</a></li> <li>You already have a React application, if not details on this <a href="http://bit.ly/react-tutorial-intro">here</a></li> <li>You already have an AWS account setup, if not details on this <a href="http://bit.ly/aws-getting-started">here</a> <ul> <li>You will also need to make sure you have your CLI configured w/ credentials.  
See <a href="http://bit.ly/configure-aws-cli" target="_blank" rel="noopener">here </a>for details</li> </ul> </li> <li>You already have a build npm task in your package.json file, if not look <a href="http://bit.ly/gist-react-start">here</a></li> </ol> <p>To get started, the first thing we are going to need to install a few NPM packages</p> <ul> <li><a href="http://bit.ly/npm-aws-sdk" target="_blank" rel="noopener">aws-sdk</a></li> <li><a href="http://bit.ly/npm-gulp-run" target="_blank" rel="noopener">gulp-run</a></li> <li><a href="http://bit.ly/npm-run-sequence" target="_blank" rel="noopener">run-sequence</a></li> </ul> <div class="code-snippet"> <pre class="code-content">npm install --save-dev aws-sdk gulp-run run-sequence </pre> </div> <p>After you have the npm packages installed we will need to start working on your gulpfile.js changes</p> <p>Add the reference to the package we added</p> <div class="code-snippet"> <pre class="code-content">const run = require('gulp-run'); const AWS = require('aws-sdk'); const runSequence = require('run-sequence'); </pre> </div> <p>After we reference our packages we need to create an instance of the S3 object, this will be used to push the files into our S3 bucket.</p> <div class="code-snippet"> <pre class="code-content">const s3 = new AWS.S3({apiVersion: '2006-03-01'}); </pre> </div> <p>Now that we have all the prerequisite stuff out of the way, time to get down to business.</p> <p>We are going to create 3 tasks.</p> <ol> <li>deploy- -&gt; This will be used to kick off the entire process.</li> <li>build-react -&gt; This will be used to call the npm build (from your package.json)</li> <li>copy-to-bucket -&gt; This is what will do the actual deployment to S3 <ul> <li>This gulp task is going to make us of the aws skd <code>s3 cp</code> <a href="http://bit.ly/S3-AWS-CLI" target="_blank" rel="noopener">command</a>.</li> </ul> </li> </ol> <div class="code-snippet"> <pre class="code-content">gulp.task('build-react', buildReact); gulp.task('copy-to-bucket', copyToS3Bucket); gulp.task('deploy', function(done){ // we use runSequence to ensure that our tasks // sync vs asynch runSequence( 'build-react', 'copy-to-bucket', done); }); function buildReact() { return run("npm run build").exec(); } function copyToS3Bucket() { const pathToS3Bucket = "s3://the-name-of-my-bucket"; // I am using the --recurisve here in order top copy all the files in my build folder const publishPackageCommand = "aws s3 cp ./path-to-build-folder " + pathToS3Bucket + " --recursive"; return run(publishPackageCommand).exec(); } </pre> </div> <p>After you have implemented the changes above you can run gulp on the command line and watch the magic unfold.</p> <div class="code-snippet"> <pre class="code-content">gulp deploy </pre> </div> <p>If you would like to see a full working copy of my gulpfile.js checkout this <a href="http://bit.ly/gist-full-gulp-file" target="_blank" rel="noopener">gist</a>.</p> Containers - What Are They Good For? https://jimmybogard.com/containers-what-is-it-good-for/ Jimmy Bogard urn:uuid:1c40bf9d-8dfb-6282-c27b-224ef9c0f1bc Tue, 06 Feb 2018 21:21:00 +0000 <blockquote> <p>Containers, huh, good god <br> What is it good for? <br> Probably something? <br> - Edwin Starr (disputed)</p> </blockquote> <p>Here at <a href="https://www.headspring.com">Headspring</a>, we're seeing more and more usage of Docker for local development. 
Having not really touched Docker or containers, I wanted to understand how Docker could help make our lives easier for development,</p> <blockquote> <p>Containers, huh, good god <br> What is it good for? <br> Probably something? <br> - Edwin Starr (disputed)</p> </blockquote> <p>Here at <a href="https://www.headspring.com">Headspring</a>, we're seeing more and more usage of Docker for local development. Having not really touched Docker or containers, I wanted to understand how Docker could help make our lives easier for development, whether it's just local development, our CI/CD pipeline, production, anything really.</p> <p>I hadn't touched containers mainly because I really didn't have the problems that I (thought) it solved. I've done automated builds and deployments, scripted local dev setup, for over a decade now, so I wasn't sure how containers could streamline what I saw as a fairly streamlined process at was today. In my previous company, now over 10 years ago, we were doing blue-green deployments well before it had that sort of name, for a large (in the billions of $ in yearly revenue) e-commerce website, scripting out VM deployments before even Powershell was around to help script. That, and I don't do any Linux work, nor do any of my clients.</p> <p>That being said, I wanted to see what containers could do for me.</p> <h3 id="learningcontainers">Learning containers</h3> <p>First and foremost was really understanding what containers are. I'm not a Linux person, so when I hear "containers are just cgroups and namespaces" I'm already behind because I don't know what cgroups or namespaces are. It also doesn't help that evidently <a href="https://blog.jessfraz.com/post/containers-zones-jails-vms/">containers aren't actually a thing</a>. So...yeah.</p> <p>And on Windows, there are things called containers, but like Linux, it's not really a thing either, and there are <a href="https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/">different flavors</a>.</p> <p>Luckily, between the Docker website and the Windows website, there are tons of great resources for learning Docker and containers. Looking at our current build/deploy pipeline, I wanted to build a picture of what the Docker terms mean for how we typically build apps.</p> <h3 id="seminotreallyaccuratedockerterminology">Semi-not-really-accurate Docker terminology</h3> <p>Docker has quite a few terms and the world is similar, but not the same, as what I'm used to. The mapping I had in my head is roughly:</p> <blockquote> <p>Dockerfile =/= "build.ps1" <br> Image =/= "foo.exe" <br> Container =/= "$ foo.exe" <br> Docker-compose =/= "foo.sln" <br> Docker Hub =/= NuGet server</p> </blockquote> <p>Our projects have a build script that produces a runnable instance of the app. It's not quite "F5" so more or less it's equivalent to the build instructions of a dockerfile. Built images are kinda like our built app, and a running instance is similar to a container as a running image.</p> <p>A solution pulls together many different projects and applications into a single "run" experience, similar to a docker-compose configuration. 
And finally, we use Octopus Deploy and NuGet as our build artifacts, similar to Docker and Docker Hub.</p> <p>Semi-quasi-sorta-probably-not-accurate equivalent terminology, but close enough for my mental model.</p> <h3 id="pipelinebeforedocker">Pipeline before Docker</h3> <p>I wanted to understand how Docker could help our development, but first, let's look at what my typical CI/CD pipeline looks today:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/1/2018/Picture0018.png" alt="Pipeline"></p> <p>In this picture, I'm using some sort of Git source control server, Bitbucket, VSTS, whatever. For local development, it's usually just Visual Studio (or Code) and a database. The database is usually SQL Express, sometimes SQL Server Developer Edition if there are additional things going on.</p> <p>Because we can't assume everyone has the same instance name, our app would use environment variables for things like instance names/connection strings so that two developers wouldn't need to have the <em>exact</em> same environment.</p> <p>For continuous integration, we'll use AppVeyor or VSTS for cloud builds, or TeamCity or Jenkins for on-prem builds. Rarely we'd use TFS for on-prem, but it is there.</p> <p>The output of our build is an artifact, whose design matches our deployment needs. For Octopus Deploy, this would be one or more versioned NuGet packages, or with VSTS it'd just be the build artifact (typically a zip file).</p> <p>Finally, for continuous delivery, promoting packages between environments, we default to Octopus Deploy. We're looking at VSTS too, but it's somewhat similar. You promote immutable build artifacts from environment to environment, with environment configuration usually inside our CD pipeline configuration. Deployments would be Azure, AWS, or some bespoke on-prem servers.</p> <p>Everything is automated from end-to-end, local setup, builds, deployments.</p> <p>Given this current state, how could containers help our process, from local dev to production?</p> <p>In the next few posts, I'll walk through my journey of adding Docker/containers to each of these stages, to see where it works well for our typical clients, and where it still needs work.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=YPVr-LIcub0:z5OSw4Hzsqw:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=YPVr-LIcub0:z5OSw4Hzsqw:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=YPVr-LIcub0:z5OSw4Hzsqw:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=YPVr-LIcub0:z5OSw4Hzsqw:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=YPVr-LIcub0:z5OSw4Hzsqw:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/YPVr-LIcub0" height="1" width="1" alt=""/> Unable To Access Mysql With Root and No Password After New Install On Ubuntu http://blog.jasonmeridth.com/posts/unable-to-access-mysql-with-root-and-no-password-after-new-install-on-ubuntu/ Jason Meridth’s Blog urn:uuid:beb776fd-cb76-4d17-134c-e9a20edc7c5f Wed, 31 Jan 2018 00:13:00 +0000 This bit me in the rear end again today. Had to reinstall mysql-server-5.7 for other reasons. <p>This bit me in the rear end again today. 
Had to reinstall mysql-server-5.7 for other reasons.</p> <p>You just installed <code class="highlighter-rouge">mysql-server</code> locally for your development environment on a recent version of Ubuntu (I have 17.10 artful installed). You did it with a blank password for <code class="highlighter-rouge">root</code> user. You type <code class="highlighter-rouge">mysql -u root</code> and you see <code class="highlighter-rouge">Access denied for user 'root'@'localhost'</code>.</p> <p><img src="http://blog.jasonmeridth.com/assets/images/wat.png" alt="wat" /></p> <p>Issue: Because you chose to not have a password for the <code class="highlighter-rouge">root</code> user, the <code class="highlighter-rouge">auth_plugin</code> for my MySQL defaulted to <code class="highlighter-rouge">auth_socket</code>. That means if you type <code class="highlighter-rouge">sudo mysql -u root</code> you will get in. If you don’t, then this is NOT the fix for you.</p> <p>Solution: Change the <code class="highlighter-rouge">auth_plugin</code> to <code class="highlighter-rouge">mysql_native_password</code> so that you can use the root user in the database.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ sudo mysql -u root mysql&gt; USE mysql; mysql&gt; UPDATE user SET plugin='mysql_native_password' WHERE User='root'; mysql&gt; FLUSH PRIVILEGES; mysql&gt; exit; $ sudo systemctl restart mysql $ sudo systemctl status mysql </code></pre></div></div> <p><strong>NB</strong> ALWAYS set a password for mysql-server in staging/production.</p> <p>Cheers.</p> How Respawn Works https://jimmybogard.com/how-respawn-works/ Jimmy Bogard urn:uuid:ab84babb-199f-a174-c130-873cdd1242b2 Tue, 30 Jan 2018 22:20:14 +0000 <p>This post is mainly a reminder to myself when inevitably I forget what I was doing when designing <a href="https://github.com/jbogard/respawn">Respawn</a>. The <a href="https://lostechies.com/jimmybogard/2013/06/18/strategies-for-isolating-the-database-in-tests/">general problem space</a> is something I've covered quite a lot, but I haven't really walked through how Respawn works internally.</p> <p>The general problem is trying to find the correct order</p> <p>This post is mainly a reminder to myself when inevitably I forget what I was doing when designing <a href="https://github.com/jbogard/respawn">Respawn</a>. The <a href="https://lostechies.com/jimmybogard/2013/06/18/strategies-for-isolating-the-database-in-tests/">general problem space</a> is something I've covered quite a lot, but I haven't really walked through how Respawn works internally.</p> <p>The general problem is trying to find the correct order of deletion for tables when you have foreign key constraints. You can do something like:</p> <pre><code class="language-sql">ALTER TABLE [Orders] NOCHECK CONSTRAINT ALL; ALTER TABLE [OrderLineItems] NOCHECK CONSTRAINT ALL; DELETE [Orders]; DELETE [OrderLineItems]; ALTER TABLE [Orders] WITH CHECK CHECK CONSTRAINT ALL; ALTER TABLE [OrderLineItems] WITH CHECK CHECK CONSTRAINT ALL; </code></pre> <p>You just ignore what order you <em>can</em> delete things in by just disabling all constraints, deleting, then re-enabling. Unfortunately, this results in 3x as many SQL statements over just <code>DELETE</code>, slowing down your entire test run. Respawn fixes this by building the list of <code>DELETE</code> intelligently, and with 3.0, detecting circular relationships. 
But how?</p> <h3 id="traversingagraph">Traversing a graph</h3> <p>Assuming we've properly set up foreign key constraints, we can view our schema through its relationships:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/0/2018/Picture0013.png" alt="Database schema"></p> <p>Another way to think about this is to imagine each table as a node, and each foreign key as an edge. But not just any kind of edge, there's a direction. Putting this together, we can construct a <a href="https://en.wikipedia.org/wiki/Directed_graph">directed graph</a>:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/0/2018/Picture0014.png" alt="Directed graph"></p> <p>There's a special kind of graph, a directed acyclic graph, where there are no cycles, but we can't make that assumption. We only know there are directed edges.</p> <p>So why do we need this directed graph? Assuming we don't have any kind of cascades set up for our foreign keys, the order in which we delete tables should start tables with no foreign keys, then tables that reference those, then those that reference those, and so on until we reach the last table. The table that we delete first are those with no foreign keys pointing to it - because no tables depend on it.</p> <p>In directed graph terms, this is known as a <a href="https://en.wikipedia.org/wiki/Depth-first_search">depth-first search</a>. The ordered list of tables is found by conducting a depth-first search, adding the nodes at the deepest first until we reach the root(s) of the graph. When we're done, the list to delete is just the reversed list of nodes. As we traverse, we keep track of the visited nodes, exhausting our not-visited list until it's empty:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/0/2018/graphtraversal2.gif" alt="Graph Traversal"></p> <p>The code for traversing is fairly straightforward, first we need to kick off the traversal based on the overall (unordered) list of tables:</p> <pre><code class="language-c#">private static List&lt;Table&gt; BuildDeleteList( HashSet&lt;Table&gt; tables) { var toDelete = new List&lt;Table&gt;(); var visited = new HashSet&lt;Table&gt;(); foreach (var table in tables) { BuildTableList(table, visited, toDelete); } return toDelete; } </code></pre> <p>Then we conduct our DFS, adding tables to our <code>toDelete</code> list once we've searched all its relationships recursively:</p> <pre><code class="language-c#">private static void BuildTableList( Table table, HashSet&lt;Table&gt; visited, List&lt;Table&gt; toDelete) { if (visited.Contains(table)) return; foreach (var rel in table.Relationships) { BuildTableList(rel.ReferencedTable, visited, toDelete); } toDelete.Add(table); visited.Add(table); } </code></pre> <p>Not pictured - constructing our <code>Table</code> with its <code>Relationships</code>, which is just a matter of querying the schema metadata for each database, and building the graph.</p> <p>To actually execute the deletes, we just build <code>DELETE</code> statements in the reverse order of our list, and we should have error-free deletions.</p> <p>This technique works, as we've shown, as long as there are no cycles in our graph. 
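<p>To make the resulting order concrete, here is a tiny worked example using the two tables from the opening SQL. This is only a sketch: the post doesn't show how <code>Table</code> and <code>Relationship</code> are constructed, so the shapes below are assumptions, and <code>BuildDeleteList</code> is the method shown above:</p> <pre><code class="language-c#">using System.Collections.Generic;

// OrderLineItems holds a foreign key to Orders, mirroring the DELETE example at the top.
// BuildDeleteList is the method shown above and is assumed to be in scope here.
var orders = new Table("Orders");
var orderLineItems = new Table("OrderLineItems");
orderLineItems.Relationships.Add(new Relationship(orders));

var toDelete = BuildDeleteList(new HashSet&lt;Table&gt; { orders, orderLineItems });
// toDelete comes back as [Orders, OrderLineItems]; reversed, the safe order is:
//   DELETE [OrderLineItems];
//   DELETE [Orders];

// Minimal stand-ins for the post's types (assumed shapes)
class Table
{
    public Table(string name) =&gt; Name = name;
    public string Name { get; }
    public List&lt;Relationship&gt; Relationships { get; } = new List&lt;Relationship&gt;();
}

class Relationship
{
    public Relationship(Table referencedTable) =&gt; ReferencedTable = referencedTable;
    public Table ReferencedTable { get; }
}
</code></pre>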
If our tables somehow create a cycle (self-referencing tables don't count, databases can handle a single <code>DELETE</code> on a table with relationships with itself), then our DFS will continue to traverse the cycle infinitely, until we hit a <code>StackOverflowException</code>.</p> <p>With Respawn 3.0, I wanted to fix this.</p> <h3 id="dealingwithcycles">Dealing with cycles</h3> <p>There's lots of literature and examples out there on <a href="https://www.google.com/search?q=detect+cycle+in+directed+graph">detecting cycles in a directed graph</a>, but I don't just need to detect a cycle. I have to understand, what should I do when I detect a cycle? How should this affect the overall deletion strategy?</p> <p>For this, it really depends on the database. The overall problem is the foreign key constraints prevent us from deleting with orphans, but with a cycle in our graph, I can't delete tables one-by-one.</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/0/2018/Picture0015.png" alt="Cyclical graph"></p> <p>We could try to re-model our schema to remove the cycle, but above, the cycle might make the most sense. A project as a single project lead, and for a user, they have a primary project they work on. Ignoring how we'd figure it out, what would be the right set of commands to do? We still have to worry about the rest of the tables, too.</p> <p>What we'd like to do is find the problematic tables and just disable the constraints for those tables (or disable those constraints). This depends on the database, however, on how we can disable constraints. Each database is slightly different:</p> <ul> <li>SQL Server - per constraint or table</li> <li>PostgreSQL - per table</li> <li>Oracle - per constraint</li> <li>MySQL - entire session</li> </ul> <p>For MySQL, this makes our job a bit easier. We can disable constraints at the beginning of our deletion, and the order of the tables no longer matters.</p> <p>The other databases can disable individual constraints and/or constraints for the entire table. Given the choice, it's likely fewer SQL statements just to disable for tables, but Oracle lets us only disable individual constraints. So that means that our algorithm must:</p> <ul> <li>Find ALL cycles (not just IF there is a cycle)</li> <li>Find all tables and relationships involved the cycles</li> <li>Identify these problematic relationships when traversing</li> </ul> <p>Our updated graph now looks like:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/0/2018/Picture0016.png" alt="Cyclic graph"></p> <p>We need to detect the relationship between <code>Projects</code> and <code>Users</code>, keep those in a list, and finally note those relationships when traversing the entire graph.</p> <p>In order to find all the cycles, once we find one, we can note it, continuing on instead of repeating the same paths:</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/0/2018/cycle2.gif" alt="cycle detection"></p> <p>We need to alter our algorithm a bit, first to keep track of our new lists for not visited, visiting, and visited. 
We also need to return not just the ordered list of tables to delete via a stack, but the set of relationships that we identified as causing a cycle:</p> <pre><code class="language-c#">private static bool HasCycles(Table table, HashSet&lt;Table&gt; notVisited, HashSet&lt;Table&gt; visiting, HashSet&lt;Table&gt; visited, Stack&lt;Table&gt; toDelete, HashSet&lt;Relationship&gt; cyclicalRelationships ) { if (visited.Contains(table)) return false; if (visiting.Contains(table)) return true; notVisited.Remove(table); visiting.Add(table); foreach (var relationship in table.Relationships) { if (HasCycles(relationship.ReferencedTable, notVisited, visiting, visited, toDelete, cyclicalRelationships)) { cyclicalRelationships.Add(relationship); } } visiting.Remove(table); visited.Add(table); toDelete.Push(table); return false; } </code></pre> <p>Walking through the code, we're examining one table at a time. If the set of visited tables already contains this table, there's nothing to do and we can just return <code>false</code>.</p> <p>If, however, the set of currently visiting tables contains the table then we've detected a cycle, so return <code>true</code>.</p> <p>Next, we move our table from the set of <code>notVisited</code> to currently <code>visiting</code> tables. We loop through our relationships, to see if any complete a cycle. If that relationship does, we add that relationship to the set of cyclical relationships.</p> <p>Finally, once we're done navigating our relationships, we move our table from the set of visiting to visited tables, and push our table to the stack of tables to delete. We switched to a stack so that we can just pop off the tables and that becomes the right order to delete.</p> <p>To kick things off, we just loop through the collection of all our tables, checking each for a cycle and building up our list of tables to delete and bad relationships:</p> <pre><code class="language-c#">private static (HashSet&lt;Relationship&gt; cyclicRelationships, Stack&lt;Table&gt; toDelete) FindAndRemoveCycles(HashSet&lt;Table&gt; allTables) { var notVisited = new HashSet&lt;Table&gt;(allTables); var visiting = new HashSet&lt;Table&gt;(); var visited = new HashSet&lt;Table&gt;(); var cyclicRelationships = new HashSet&lt;Relationship&gt;(); var toDelete = new Stack&lt;Table&gt;(); foreach (var table in allTables) { HasCycles(table, notVisited, visiting, visited, toDelete, cyclicRelationships); } return (cyclicRelationships, toDelete); } </code></pre> <p>It's expected that we'll visit some tables twice, but that's a quick check in our set and we only traverse the entire graph once.</p> <p>Now that we have our list of bad tables/relationships, we can apply the right SQL DDL to disable/enable constraints intelligently. 
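<p>For SQL Server, the targeted commands end up looking roughly like the sketch below. This is not Respawn's actual code, and the <code>ParentTable</code> and <code>Name</code> members on <code>Relationship</code> are assumptions (the post doesn't show that type's shape), where <code>ParentTable</code> means the table declaring the foreign key:</p> <pre><code class="language-c#">using System.Collections.Generic;

static class CommandBuilder
{
    // Sketch: disable only the constraints that participate in a cycle, run the
    // ordinary DELETEs in the order popped off the stack, then re-enable them.
    public static IEnumerable&lt;string&gt; BuildCommands(
        IReadOnlyCollection&lt;Relationship&gt; cyclicalRelationships,
        IEnumerable&lt;Table&gt; tablesInDeleteOrder)
    {
        foreach (var rel in cyclicalRelationships)
            yield return $"ALTER TABLE {rel.ParentTable.Name} NOCHECK CONSTRAINT {rel.Name}";

        foreach (var table in tablesInDeleteOrder)
            yield return $"DELETE {table.Name}";

        foreach (var rel in cyclicalRelationships)
            yield return $"ALTER TABLE {rel.ParentTable.Name} WITH CHECK CHECK CONSTRAINT {rel.Name}";
    }
}
</code></pre>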
By only targeting the bad relationships, we can minimize the extra DDL statements by adding just add a single DDL statement for each cycle found.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=E9GMe8vYeBo:IahrEJiuGgA:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=E9GMe8vYeBo:IahrEJiuGgA:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=E9GMe8vYeBo:IahrEJiuGgA:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=E9GMe8vYeBo:IahrEJiuGgA:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=E9GMe8vYeBo:IahrEJiuGgA:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/E9GMe8vYeBo" height="1" width="1" alt=""/> Respawn 3.0 Released https://jimmybogard.com/respawn-3-0-released/ Jimmy Bogard urn:uuid:9a7f3fcc-4824-5dd9-558e-f8a7d2feee44 Mon, 29 Jan 2018 22:11:25 +0000 <p><a href="https://github.com/jbogard/Respawn">Respawn</a>, the intelligent database deleter, reached the <a href="https://github.com/jbogard/Respawn/releases/tag/v3.0.0">3.0 milestone</a> today. In this release, Respawn now supports complex circular/cyclical relationships.</p> <p>When Respawn detects a cycle in the graph, it substitutes a separate deletion strategy by disabling/enabling foreign key constraints just for those tables affected.</p> <p>This release also adds</p> <p><a href="https://github.com/jbogard/Respawn">Respawn</a>, the intelligent database deleter, reached the <a href="https://github.com/jbogard/Respawn/releases/tag/v3.0.0">3.0 milestone</a> today. In this release, Respawn now supports complex circular/cyclical relationships.</p> <p>When Respawn detects a cycle in the graph, it substitutes a separate deletion strategy by disabling/enabling foreign key constraints just for those tables affected.</p> <p>This release also adds support for Oracle, and drops support for SQL Server CE, bringing the supported databases to:</p> <ul> <li>SQL Server</li> <li>PostgreSQL</li> <li>MySQL/MariaDB</li> <li>Oracle</li> </ul> <p>Download via <a href="https://www.nuget.org/packages/Respawn/3.0.0">NuGet</a>:</p> <pre><code>Install-Package Respawn </code></pre> <p>Or your favorite package manager.</p> <p>Enjoy!</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=sHJZV-7DezQ:2OYvK92JrPY:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=sHJZV-7DezQ:2OYvK92JrPY:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=sHJZV-7DezQ:2OYvK92JrPY:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=sHJZV-7DezQ:2OYvK92JrPY:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=sHJZV-7DezQ:2OYvK92JrPY:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/sHJZV-7DezQ" height="1" width="1" alt=""/> Designing Microservice Messages: A Primer https://jimmybogard.com/designing-microservices/ Jimmy Bogard urn:uuid:e48ab69c-e6cb-0ab0-a8f3-d8f8caf5f8ba Mon, 22 Jan 2018 22:15:20 +0000 <p>When you move from monoliths to microservices, and your services aren't 100% isolated from each other, eventually you need your microservices to communicate. 
They need to expose their capabilities to other applications and systems, and when you get to this point, you need to design their means of communication.</p> <p>Microservices</p> <p>When you move from monoliths to microservices, and your services aren't 100% isolated from each other, eventually you need your microservices to communicate. They need to expose their capabilities to other applications and systems, and when you get to this point, you need to design their means of communication.</p> <p>Microservices doesn't prescribe a specific mode of messaging (nor should it), and I personally like the presentation <a href="https://www.youtube.com/watch?v=rXi5CLjIQ9k">"Messaging and Microservices"</a> <a href="https://gotocon.com/dl/goto-amsterdam-2016/slides/ClemensVasters_MessagingAndMicroservices.pdf">(slides)</a> from <a href="http://vasters.com/blog/">Clemens Vasters</a> as a great overall primer to the problem space, so start there.</p> <p>Once we understand the overall landscape of messaging and microservices, the next part is design. How do we design the communication between different services? How do we build the channels so that we maintain high cohesion, low coupling, and ultimately, autonomy for our services? What are some general guidelines and principles for ensuring our success here?</p> <p><img src="https://jimmybogardsblog.blob.core.windows.net/jimmybogardsblog/0/2018/Picture0012.png" alt="Service A to Service B Communication"></p> <p>Luckily, the names of things of changed over the years but the rules stay largely the same. For (micro)services-based systems I've run into that haven't been as successful, designing the boundaries and interactions goes a long way into ensuring the success of your overall architecture.</p> <h3 id="theentrancefeegoodboundaries">The Entrance Fee - Good Boundaries</h3> <p>Designing microservice boundaries well is our first goal. I often find that when messages are designed poorly, it's simply because the original boundaries weren't designed well. Well-defined microservice boundaries tend to be self-evident, because they achieve the goal of autonomy. Poorly-defined microservice boundaries tend to lose their autonomy, relying on a group of services to actually perform a business goal.</p> <p>If you have to run a dozen microservices for your service to even run, it's a sign you've chosen the wrong boundaries. Instead of focusing on the nouns in your system (Orders, Customers, Products), you instead should focus on capabilities (Catalog, Checkout). Services that just manage a thing tend to quite data-oriented and RPC-focused. Rather than focusing on tiers, we look at vertical slices that stretch all the way to the front-end.</p> <p>But once we have those boundaries, how do we actually design the communication between services? 
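<p>As a small taste of where this goes, here is a sketch (invented names, not from this post) of the kind of contract a capability-focused Checkout service tends to publish. It is a fact about what happened, rather than a CRUD call against an Order noun:</p> <pre><code class="language-c#">using System;
using System.Collections.Generic;

// Hypothetical event published by a Checkout capability.
// Consumers react to the fact; nobody asks Checkout to "update an order row".
public record OrderSubmitted(
    Guid OrderId,
    string CustomerId,
    IReadOnlyList&lt;OrderLine&gt; Lines,
    DateTimeOffset SubmittedAt);

public record OrderLine(string Sku, int Quantity, decimal UnitPrice);
</code></pre>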
In the next few posts, I'll walkthrough designing messages both inside and between services, looking at the benefits and tradeoffs of each approach, as well as some general rules of thumb and patterns that tend to let us fall into the pit of microservice success.</p><div class="feedflare"> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=c4E4bk_8_ro:C8_uCLOaWSk:yIl2AUoC8zA"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?d=yIl2AUoC8zA" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=c4E4bk_8_ro:C8_uCLOaWSk:V_sGLiPBpWU"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=c4E4bk_8_ro:C8_uCLOaWSk:V_sGLiPBpWU" border="0"></img></a> <a href="http://feeds.feedburner.com/~ff/GrabBagOfT?a=c4E4bk_8_ro:C8_uCLOaWSk:gIN9vFwOqvQ"><img src="http://feeds.feedburner.com/~ff/GrabBagOfT?i=c4E4bk_8_ro:C8_uCLOaWSk:gIN9vFwOqvQ" border="0"></img></a> </div><img src="http://feeds.feedburner.com/~r/GrabBagOfT/~4/c4E4bk_8_ro" height="1" width="1" alt=""/> New Job http://blog.jasonmeridth.com/posts/new-job/ Jason Meridth’s Blog urn:uuid:a0b0c062-5e64-14aa-a16e-7b1f35b11ecc Mon, 08 Jan 2018 18:13:00 +0000 Well, it is a new year and I’ve started a new job. I am now a Senior Software Engineer at True Link Financial. <p>Well, it is a new year and I’ve started a new job. I am now a Senior Software Engineer at <a href="https://truelinkfinancial.com">True Link Financial</a>.</p> <p><img src="http://blog.jasonmeridth.com/assets/images/tllogo.png" alt="true link financial logo" /></p> <p>After interviewing with the co-founders Kai and Claire and their team, I knew I wanted to work here.</p> <p><strong>TL;DR</strong>: True Link: We give elderly and disable (really, anyone) back their financial freedom where they may not usually have it.</p> <p>Longer Version: Imagine you have an elderly family member who may start showing signs of dimensia. You can give them a True Link card and administer their card. You link it to their bank account or another source of funding and you can set limitations on when, where and how the card can be used. The family member feels freedom by not having to continually ask for money but is also protected by scammers and non-friendly people (yep, they exist).</p> <p>The customer service team, the marketing team, the product team, the engineering team and everyone else at True Link are amazing.</p> <p>For any nerd readers, the tech stack is currently Rails, React, AWS, Ansible. We’ll be introducing Docker and Kubernetes soon hopefully, but always ensuring the right tools for the right job.</p> <p>Looking forward to 2018.</p> <p>Cheers.</p> The Gemba Walk for Designing Software https://jimmybogard.com/the-gemba-walk-for-designing-software/ Jimmy Bogard urn:uuid:323f8d30-e909-41c1-636d-f0fe14f7f343 Thu, 04 Jan 2018 16:39:29 +0000 <p>The <a href="https://www.lean.org/shook/displayobject.cfm?o=1843">Gemba Walk</a> is a common lean practice for understanding a current process as-is before taking any action to improve that process. Traditionally, you'd see this in lean manufacturing, where you'd walk through a process, in-person, to understand the purpose, process, and people involved in creating some sort of value</p> <p>The <a href="https://www.lean.org/shook/displayobject.cfm?o=1843">Gemba Walk</a> is a common lean practice for understanding a current process as-is before taking any action to improve that process. 
Traditionally, you'd see this in lean manufacturing, where you'd walk through a process, in-person, to understand the purpose, process, and people involved in creating some sort of value or product.</p> <p>For whatever reason, the "Gemba Walk" is more or less dismissed in lean software engineering as a means of walking the value chain of the software development process. I'm dubious about this - with all new clients, I walk the value chain from <a href="https://www.amazon.com/Implementing-Lean-Software-Development-Concept/dp/0321437381">concept to cash</a>.</p> <p>But that's not the walk I'm referring to. Not walking the software delivery process, but walking the process of the people for whom we're building the software.</p> <h3 id="traditionalgembawalk">Traditional Gemba Walk</h3> <p><a href="https://en.wikipedia.org/wiki/Gemba">Gemba</a> (or genba) is a term that refers to the place where work is done. The idea behind a Gemba Walk is to literally go to where the work is done, inspect, and understand the actual physical process happening in order to make any sort of process improvements.</p> <p>Rather than discussing how to improve a shipment packing process in a sterile conference room on a whiteboard, you first do a Gemba Walk. The walk consists of three parts:</p> <ol> <li>Go See </li> <li>Ask Why </li> <li>Show Respect</li> </ol> <p>The goal is not to put blame on any one person for why the process is bad, but to fully understand (with respect) the people, process, and purpose. During the walk, we don't talk about any future improvements or anything of the like. Just focus on understanding.</p> <p>You'll often do two Gemba Walks, where the first is just to generate the value stream map, and the second is to understand how well the value stream performs (see <a href="https://www.amazon.com/Value-Stream-Mapping-Organizational-Transformation/dp/0071828915/">Value Stream Mapping</a> for more).</p> <p>There's way more to this practice, but let's look at how a Gemba Walk applies to software development.</p> <h3 id="gembawalksfordesigningsoftware">Gemba Walks for Designing Software</h3> <p>During a Gemba Walk, I'm faced with many different "truths" to a process:</p> <ol> <li>What management believes/wants the process to be </li> <li>What the process is documented to be </li> <li>What we aspire for the process to be </li> <li>What the process actually is</li> </ol> <p>We build software for a purpose. And to fully achieve that purpose, we need to have a deep understanding of the problem we're trying to solve. We can help this understanding by having a product owner or subject matter expert or domain expert, but in my experience, those people only really give us the information for #1-#3 above. We don't see what the existing process <em>actually</em> is.</p> <p>Seeing, especially with outside eyes, and removing <a href="https://en.wikipedia.org/wiki/Inattentional_blindness">inattentional blindness</a> is absolutely critical to building software that <em>actually solves the underlying problem</em>. It's not enough to just ask the <a href="https://en.wikipedia.org/wiki/5_Whys">5 whys</a>. We have to see the process we're trying to improve. Otherwise, we'll completely miss the deeper level of understanding that's so critical to building software that can truly transform an organization.</p> <p>Inattentional blindness is one of the leading factors for bad requirements and rework, in my experience. Someone inside of a process all day doesn't notice that that simple 3-step process is actually 10 steps.
Or they might describe the process only in terms of the exceptional cases, instead of the typical case. Or skip over all of the manual steps they perform and only describe their process in terms of interfacing with their existing legacy system. Or the process is not in any one person's head, but in everyone's. I've been on multiple Gemba Walks where the client's own people argue amongst themselves about the process and <em>they</em> learn what their process actually is.</p> <p>You can't do a Gemba Walk without the 3rd pillar - Show Respect. The Gemba Walk is not about me "not believing" or "not trusting" the domain expert, and it's important that <em>they</em> understand this as well. It's not a judgement on their ability to explain. Some things just can't be verbally explained, and having many eyes on a process results in the greatest understanding.</p> <h3 id="gembawalkinpractice">Gemba Walk in Practice</h3> <p>My Gemba Walks are the very first thing I do on a new project. Kickoffs aren't just about meet and greets; I need to see and understand the problem at hand to provide the appropriate context for what I'm about to build. It's so important that we at <a href="https://headspring.com/">Headspring</a> insist on doing these walks, at the start of a project and at the start of any new area of software we're building.</p> <p>Besides just understanding the process, the second part (looking for improvements) is focused on how the software we're building can improve that process. I'm also looking at ways to improve the process <em>without</em> software as well, because any software I don't write is software I don't have to maintain.</p> <p>In the second walk, I'm focusing on process improvements, specifically:</p> <ul> <li>How long does it take you to perform this step?</li> <li>How often do you do this?</li> <li>How long after you're done until the next step?</li> <li>Do you do one-at-a-time, or do you batch up work?</li> <li>What are the exceptional cases?</li> <li>Look for the Lean wastes</li> <li>What is the value (monetary/time/etc) of the work?</li> </ul> <p>I carry a little notepad to write down the steps and answers to these (and more) questions as we walk. I can't keep everything in my head, so after the walk we'll go back and review to make sure our collective understanding was the same, capturing the results in a more permanent format.</p> <p>When trying to show the value of your software, it's great to have that empirical data to show improvements (lead time for acquisitions went from 90 days to 7, etc.) and the Gemba Walk provides the means for doing so. I'm looking at lead time, cycle time, and throughput. It doesn't have to be 100% accurate, but it can give us that extra set of information for management to fully understand the potential and realized value of their software.</p> <p>So, as software developers, we have to get out of our chairs and go to the work being done if we want to build applications and systems that can truly transform an organization.
Without a Gemba Walk, we're merely guessing.</p> Respawn 2.0 released https://jimmybogard.com/respawn-2-0-released/ Jimmy Bogard urn:uuid:876268df-8ae8-3218-e2f4-89713dd72c44 Wed, 03 Jan 2018 19:00:03 +0000 <p>A small release for <a href="https://github.com/jbogard/Respawn">Respawn</a> but there's a breaking change in the underlying extension API, hence the major version bump. A couple things added here:</p> <ul> <li><a href="https://github.com/jbogard/Respawn/pull/26">Support for MySQL</a></li> <li><a href="https://github.com/jbogard/Respawn/pull/30">Support for Amazon RDS</a> (from a bug fix)</li> </ul> <p>The API change was added so that database adapters can specify the quote character</p> <p>A small release for <a href="https://github.com/jbogard/Respawn">Respawn</a> but there's a breaking change in the underlying extension API, hence the major version bump. A couple things added here:</p> <ul> <li><a href="https://github.com/jbogard/Respawn/pull/26">Support for MySQL</a></li> <li><a href="https://github.com/jbogard/Respawn/pull/30">Support for Amazon RDS</a> (from a bug fix)</li> </ul> <p>The API change was added so that database adapters can specify the quote character to use.
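<p>For context, typical Respawn usage with one of the non-default adapters looks roughly like the sketch below. This is an assumption-laden illustration based on the project's README rather than the release notes: the <code>Checkpoint</code>, <code>DbAdapter</code>, and <code>TablesToIgnore</code> names come from that README, and the table name is made up.</p> <pre><code class="language-c#">using System.Threading.Tasks;
using Npgsql;
using Respawn;

public static class TestDatabase
{
    // Configure once; Respawn builds its delete script on the first Reset.
    private static readonly Checkpoint Checkpoint = new Checkpoint
    {
        DbAdapter = DbAdapter.Postgres,               // adapter selection is where the quote character matters
        TablesToIgnore = new[] { "schema_versions" }  // hypothetical migration-history table to preserve
    };

    // Call between tests to wipe data while keeping the schema intact.
    public static async Task ResetAsync(string connectionString)
    {
        using (var connection = new NpgsqlConnection(connectionString))
        {
            await connection.OpenAsync();
            await Checkpoint.Reset(connection);
        }
    }
}
</code></pre>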
With this, we now have 4 databases supported:</p> <ul> <li>MS SQL Server</li> <li>MS SQL Server Compact Edition</li> <li>PostgreSQL</li> <li>MySQL</li> </ul> <p><a href="https://www.nuget.org/packages/respawn">Download from NuGet</a></p> <p>Enjoy!</p> Docker Daemon Error When Running Docker Compose http://blog.jasonmeridth.com/posts/docker-daemon-error-when-running-docker-compose/ Jason Meridth’s Blog urn:uuid:ceaee00b-4bdb-df6b-c310-5025a90e08ba Tue, 02 Jan 2018 18:11:00 +0000 <p><img src="http://blog.jasonmeridth.com/assets/images/why-docker-why-i-just-want-to-deploy.jpg" alt="docker why" /></p> <p>TL;DR Make sure you don’t have any old mounted volumes around if you see the error below.</p> <p>I just got the following error when trying to run <code class="highlighter-rouge">docker-compose up -d</code></p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running? If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable. </code></pre></div></div> <p>Please note I have a database using a mounted volume. The issue was that the previous mount was still present. Once I deleted that, <code class="highlighter-rouge">docker-compose up -d</code> would work just fine.</p> <p>If anyone knows a more elegant way to handle this, I’m open to it.</p> <p>Cheers.</p> Trunk-Based Development or Pull Requests - Why Not Both? https://jimmybogard.com/trunk-based-development-or-pull-requests-why-not-both/ Jimmy Bogard urn:uuid:36c64238-99ba-e828-494b-31bc71e2172d Wed, 06 Dec 2017 17:37:17 +0000 <p>The <a href="https://trunkbaseddevelopment.com/">Trunk-Based Development</a> movement is often proposed as the alternative to <a href="https://datasift.github.io/gitflow/IntroducingGitFlow.html">Git Flow</a>. I understand this distinction - managing streams of work through long-lived branches can be trouble. For those new to TBD, it can look like this means throwing away <em>all</em> branches and everyone commits literally to trunk. Typically this</p> <p>The <a href="https://trunkbaseddevelopment.com/">Trunk-Based Development</a> movement is often proposed as the alternative to <a href="https://datasift.github.io/gitflow/IntroducingGitFlow.html">Git Flow</a>. I understand this distinction - managing streams of work through long-lived branches can be trouble. For those new to TBD, it can look like this means throwing away <em>all</em> branches and everyone commits literally to trunk. Typically this comes out as "feature branches are bad". But I've seen the opposite - it's not that <em>feature branches</em> are bad, but that long-lived branches are bad.</p> <p>While there are plenty in the Agile/TBD movement that see this as true, I've never seen it as an either/or.
First, why is TBD good and long-running branches bad? To see why TBD works in a lot of cases, we can trace the reasons all the way back to Lean/Kanban principles.</p> <h2 id="limitwip">Limit WIP</h2> <p>When you have many commits in a branch, or lots of work in a branch, we're building up an inventory of work. The more our inventory builds, the more difficult it becomes to move that work through the development pipeline. This is why when I was on teams that did 6-month deployments, our deployments were so incredibly painful. We had months of work to test, verify, validate, and ship across many different departments. We literally brought sleeping bags to work.</p> <p>Most see limiting WIP strictly in terms of the development activities. Don't work on more than one ticket at a time. But what's missing is a visualization of the <em>entire</em> value stream, that goes from concept to production. When we have long-running branches, that's WIP piling up before production.</p> <p>And if your development value stream doesn't include "deploy to production", you're missing the most valuable step in the process - shipping!</p> <p>The more we can limit WIP across the entire pipeline, the lower our cycle time can get. For that reason, long-lived branches that hold WIP go directly against the Lean concept of limiting WIP.</p> <h2 id="reducingbatchsize">Reducing batch size</h2> <p>Going along with reducing WIP is reducing the size of each item in our process. For software development, this means working in small chunks, and each small chunk represents a shippable item. We call these "features" or "stories", but the main goal is to reduce the size of these features/stories so that the items go through the system more quickly and with less variation.</p> <p>Interestingly enough, the side effect of small features and common-sizing work means that estimation no longer becomes a value-add activity (if it ever was). The <a href="https://ronjeffries.com/xprog/articles/the-noestimates-movement/">#NoEstimates</a> movement captured this explicitly - that it's more valuable to measure the cycle time, lead time, and throughput as those serve as better predictors for future delivery than committed estimates (because that's what estimates are - commitments).</p> <p>Putting these two items together - limiting WIP and reducing batch size - leads us to the conclusion that we want our "features" or "stories" to be small, and therefore no long-lived branches.</p> <h2 id="bringingpullrequestsin">Bringing Pull Requests In</h2> <p>Finally, we come to pull requests. If our features are small, then feature branching by itself isn't bad. We've tackled the problem at its root - driving to smaller features instead of just saying "branches are bad".</p> <p>For us, pull requests are a means of driving quality into the process. Inspections at this point in time mean we don't have defects further down the process that become more expensive the later we find them. Pull requests answer two questions - "is the code right?" and "is it the right code?". Checking to see that the feature meets the expectations and understanding before it hits production (or the next step even) can be critical on our projects.</p> <p>An alternative is pair programming, but in my experience, pairing just isn't for every person or every task.
We tend to pair when we need to, and work alone when we need to, with most work being alone.</p> <p>So drive to small features first, limit WIP and reduce batch size, and the rest will follow.</p> MediatR 4.0 Released https://jimmybogard.com/mediatr-4-0-released/ Jimmy Bogard urn:uuid:68e779dc-5987-caa7-8c42-2d6de2baaac9 Fri, 01 Dec 2017 15:40:56 +0000 <p>The last major release of MediatR brought a simplification in design. Instead of having several different <code>IRequest</code> types and <code>IRequestHandler</code> implementations with several flavors, MediatR would try at runtime to determine how a single <code>IRequest</code> should resolve to different <code>IRequestHandler</code> implementations (synchronous, async, async with a cancellation token). In practice,</p> <p>The last major release of MediatR brought a simplification in design. Instead of having several different <code>IRequest</code> types and <code>IRequestHandler</code> implementations with several flavors, MediatR would try at runtime to determine how a single <code>IRequest</code> should resolve to different <code>IRequestHandler</code> implementations (synchronous, async, async with a cancellation token). In practice, this proved problematic as not all containers support this sort of 'try-resolve' behavior. MediatR relied on <code>try...catch</code> to resolve, which can work, but has unintended side effects.</p> <p>MediatR 4.0 consolidates these design decisions. We'll still have a single <code>IRequest</code> type, but will now have only a single <code>IRequestHandler</code> interface.
This simplifies the registrations quite a bit:</p> <pre><code class="language-c#">// Before (StructureMap)
cfg.Scan(scanner =&gt;
{
    scanner.AssemblyContainingType&lt;Ping&gt;();
    scanner.ConnectImplementationsToTypesClosing(typeof(IRequestHandler&lt;,&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(IRequestHandler&lt;&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(IAsyncRequestHandler&lt;,&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(IAsyncRequestHandler&lt;&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(ICancellableAsyncRequestHandler&lt;,&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(ICancellableAsyncRequestHandler&lt;&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(INotificationHandler&lt;&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(IAsyncNotificationHandler&lt;&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(ICancellableAsyncNotificationHandler&lt;&gt;));
});

// After
cfg.Scan(scanner =&gt;
{
    scanner.AssemblyContainingType&lt;Ping&gt;();
    scanner.ConnectImplementationsToTypesClosing(typeof(IRequestHandler&lt;,&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(IRequestHandler&lt;&gt;));
    scanner.ConnectImplementationsToTypesClosing(typeof(INotificationHandler&lt;&gt;));
});
</code></pre> <p>This new default interface is the previous <code>ICancellableAsyncRequestHandler</code> interface:</p> <pre><code class="language-c#">public interface IRequestHandler&lt;in TRequest, TResponse&gt;
    where TRequest : IRequest&lt;TResponse&gt;
{
    Task&lt;TResponse&gt; Handle(TRequest request, CancellationToken cancellationToken);
}

public interface IRequestHandler&lt;in TRequest&gt;
    where TRequest : IRequest
{
    Task Handle(TRequest message, CancellationToken cancellationToken);
}
</code></pre> <p>Instead of multiple interfaces, MediatR 4.0 (re)introduces helper base classes:</p> <ul> <li>RequestHandler - for synchronous actions</li> <li>AsyncRequestHandler - for async actions that ignore the cancellation token</li> </ul> <p>With the simplified interfaces, it's much less possible to "forget" a registration in your container.</p> <p>Another small but breaking change is the behavior pipeline and pre-processor now have the cancellation token as part of the interface.</p> <p>Both the <a href="https://www.nuget.org/packages/mediatr">MediatR</a> and <a href="https://www.nuget.org/packages/MediatR.Extensions.Microsoft.DependencyInjection/">MediatR.Extensions.Microsoft.DependencyInjection</a> packages are released with the simplified API.</p> <p>Since these are breaking changes to the API (hence the major version bump), migration to the new API is manual. However, to ease the transition, you can follow the <a href="https://github.com/jbogard/MediatR/wiki/Migration-Guide---3.x-to-4.0">3.0 to 4.0 migration guide</a>. I've also updated the <a href="https://github.com/jbogard/MediatR/wiki">docs</a> to walk through the new classes, and the <a href="https://github.com/jbogard/MediatR/tree/master/samples">samples</a> include the updated registrations for the major containers out there:</p> <ul> <li>Microsoft DI</li> <li>Autofac</li> <li>DryIoc</li> <li>LightInject</li> <li>Ninject</li> <li>Simple Injector</li> <li>StructureMap</li> <li>Unity</li> <li>Windsor</li> </ul> <p>A big move for MediatR to break the API, but necessary to remove the complexity in multiple interfaces and registration.
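<p>For reference, a handler against the consolidated interface ends up looking something like the sketch below; it reuses the hypothetical <code>Ping</code> type referenced in the registration snippet, and the handler body is an illustration rather than code taken from the samples:</p> <pre><code class="language-c#">using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Hypothetical request type, matching the Ping referenced in the scanning example.
public class Ping : IRequest&lt;string&gt; { }

public class PingHandler : IRequestHandler&lt;Ping, string&gt;
{
    // The single Handle signature: async-capable and cancellation-aware by default.
    public Task&lt;string&gt; Handle(Ping request, CancellationToken cancellationToken)
    {
        // Synchronous work just wraps its result; the RequestHandler base class
        // mentioned above hides this Task plumbing if you prefer.
        return Task.FromResult("Pong");
    }
}
</code></pre>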
The only other option would be to make MediatR responsible for all registration and wiring, which is more than I want MediatR to do. The design goal of MediatR is to lean on the power of DI containers for registration, and MediatR merely resolves its interfaces.</p> <p>Enjoy!</p> Lenovo Thinkpad - Swap ctrl and fn keys http://blog.jasonmeridth.com/posts/lenovo-thinkpad-swap-ctrl-and-fn-keys/ Jason Meridth’s Blog urn:uuid:0906314b-681e-bc30-7c8b-651cc73f1981 Wed, 22 Nov 2017 16:18:00 +0000 I just got a new laptop. It is a Lenovo Thinkpad X1 Carbon 5th Gen. <p>I just got a new laptop. It is a Lenovo Thinkpad X1 Carbon 5th Gen.</p> <p>Ubuntu 17.10 16 GB RAM 1 TB SSD 64-bit i7 Pentium USB-C power</p> <p>I’m in love.</p> <p><img src="http://blog.jasonmeridth.com/assets/images/lenovo-thinkpad-box.jpg" alt="lenovo thinkpad box" /></p> <p><img src="http://blog.jasonmeridth.com/assets/images/lenovo-thinkpad.jpg" alt="lenovo thinkpad" /></p> <p><img src="http://blog.jasonmeridth.com/assets/images/lenovo-thinkpad-fn-ctrl-keys.jpg" alt="lenovo thinkpad fn ctrl keys" /></p> <p>My only gripe is that the <code class="highlighter-rouge">Fn</code> key is on the far left bottom of the keyboard. I prefer that to be the <code class="highlighter-rouge">Ctrl</code> key due to copy/paste and other keyboard commands I use often. I also have very big hands and my pinky isn’t made to “find” the <code class="highlighter-rouge">Ctrl</code> key to the right of the <code class="highlighter-rouge">Fn</code> key.</p> <p>I currently use <code class="highlighter-rouge">Dconf</code> for mapping of keys in Ubuntu (currently using 17.10). I learned today that the <code class="highlighter-rouge">fn</code> key on keyboards is not managed by the operating system, which makes sense. Lenovo in all of its glorious-ness has a BIOS option to swap the <code class="highlighter-rouge">Fn</code> and <code class="highlighter-rouge">Ctrl</code> keys.</p> <p><img src="http://blog.jasonmeridth.com/assets/images/lenovo-bios-fn-ctrl-swap.jpg" alt="lenovo bios fn ctrl swap" /></p> <p>Thank you Lenovo.</p> <p>Cheers.</p> Ubuntu - set caps lock to escape http://blog.jasonmeridth.com/posts/ubuntu-set-caps-lock-to-escape/ Jason Meridth’s Blog urn:uuid:65dbca0b-7d7f-8fb6-6d0e-f29a1d639513 Wed, 22 Nov 2017 16:05:00 +0000 I just got a new laptop and had to google again on how to set caps lock key to escape (I’m a Vim user).
<p>I just got a new laptop and had to google again on how to set caps lock key to escape (I’m a Vim user).</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install dconf-tools
dconf write /org/gnome/desktop/input-sources/xkb-options "['caps:escape']"
</code></pre></div></div> <p>To know your options, use the following command:</p> <p><code class="highlighter-rouge">man xkeyboard-config</code></p> <p>(MAN pages are your friend; man is short for manual)</p> <p>You can also now use the <code class="highlighter-rouge">Dconf</code> GUI editor if you must (SHAME! ;) )</p> <p>Type <code class="highlighter-rouge">Dconf</code> in Unity or Gnome app opener and go to the following location:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>`org` → `gnome` → `desktop` → `input-sources` → `xkb-options` </code></pre></div></div> <p>Add <code class="highlighter-rouge">['caps:escape']</code> to <code class="highlighter-rouge">Custom Value</code> textbox.</p> <p><img src="http://blog.jasonmeridth.com/assets/images/dconf-caps-lock-to-escape.png" alt="dconf caps lock to escape" /></p> <p>Cheers.</p> AutoMapper 6.2.1 Released https://jimmybogard.com/automapper-6-2-1-released/ Jimmy Bogard urn:uuid:79785613-013e-f36e-551e-9a7dce00e47f Thu, 16 Nov 2017 15:02:16 +0000 <p><a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v6.2.1">Release notes here</a>.</p> <p>The previous release introduced some inadvertent breaking behavior changes in convention-based map creation and some DI scenarios. This maintenance release fixes these two issues by:</p> <ul> <li><a href="http://docs.automapper.org/en/v6.2.1/Configuration.html#resetting-static-mapping-configuration">Allowing configuration of member lists to validate per-map</a></li> <li><a href="http://docs.automapper.org/en/v6.2.1/Configuration.html#resetting-static-mapping-configuration">Allowing resetting of static mapper configuration (for testing scenarios)</a></li> </ul> <p>Find this release on NuGet:</p> <p><a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v6.2.1">Release notes here</a>.</p> <p>The previous release introduced some inadvertent breaking behavior changes in convention-based map creation and some DI scenarios.
This maintenance release fixes these two issues by:</p> <ul> <li><a href="http://docs.automapper.org/en/v6.2.1/Configuration.html#resetting-static-mapping-configuration">Allowing configuration of member lists to validate per-map</a></li> <li><a href="http://docs.automapper.org/en/v6.2.1/Configuration.html#resetting-static-mapping-configuration">Allowing resetting of static mapper configuration (for testing scenarios)</a></li> </ul> <p>Find this release on NuGet:</p> <p><a href="https://www.nuget.org/packages/AutoMapper/">https://www.nuget.org/packages/AutoMapper/</a></p> AutoMapper extensions for Microsoft DI 3.2.0 released https://jimmybogard.com/automapper-extensions-for-microsoft-di-3-2-0-released/ Jimmy Bogard urn:uuid:00cfbf85-dcae-e847-9338-7ca96c791b55 Thu, 16 Nov 2017 13:08:34 +0000 <p>Today I pushed out a small change to the <a href="https://www.nuget.org/packages/AutoMapper.Extensions.Microsoft.DependencyInjection/">AutoMapper.Extensions.Microsoft.DependencyInjection</a> package to allow instance-based initialization. Before you initialize with <code>services.AddAutoMapper()</code>, set a configuration flag:</p> <pre><code class="language-c#">ServiceCollectionExtensions.UseStaticRegistration = false; services.AddAutoMapper(); </code></pre> <p>By default the extension will register using <code>Mapper.Initialize</code>, but with this flag off, the extension instead</p> <p>Today I pushed out a small change to the <a href="https://www.nuget.org/packages/AutoMapper.Extensions.Microsoft.DependencyInjection/">AutoMapper.Extensions.Microsoft.DependencyInjection</a> package to allow instance-based initialization.
Before you initialize with <code>services.AddAutoMapper()</code>, set a configuration flag:</p> <pre><code class="language-c#">ServiceCollectionExtensions.UseStaticRegistration = false;
services.AddAutoMapper();
</code></pre> <p>By default the extension will register using <code>Mapper.Initialize</code>, but with this flag off, the extension instead registers an instance.</p> <p><code>Mapper.Initialize</code> is enforced to only be called once, which can be an issue for unit tests that set up the service collection many times.</p> <p>Enjoy!</p> Cleanup Docker http://blog.jasonmeridth.com/posts/cleanup-docker/ Jason Meridth’s Blog urn:uuid:d7b5f532-2b7c-3982-adfc-3b7b3cd1e131 Mon, 13 Nov 2017 03:15:00 +0000 Cleanup Docker <h2 id="cleanup-docker">Cleanup Docker</h2> <p>I keep having friends who have experienced the <code class="highlighter-rouge">no space left on device</code> error when trying to build images.</p> <p>I have aliases for most of my container/image/volume cleanup:</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>alias dka='dkc;dki;dkv'
alias dkc='docker ps -aq | xargs docker rm -f'
alias dki='docker images -aq | xargs docker rmi -f'
alias dkv='docker volume ls -qf dangling=true | xargs docker volume rm'
</code></pre></div></div> <p>I use <code class="highlighter-rouge">dka</code> all the time.</p> <p>There is also the <code class="highlighter-rouge">docker system prune -a</code> command that works.</p> <p>I’ve also had to unmount my local aufs volume on my ubuntu laptop via:</p> <p><code class="highlighter-rouge">sudo umount -l /var/lib/docker/aufs &amp;&amp; sudo rm -rf /var/lib/docker/aufs</code></p> <p>and all things are cleaned up. Hope this helps someone else.</p> <p>Cheers.</p> AutoMapper 6.2.0 Released https://jimmybogard.com/automapper-6-2-0-released/ Jimmy Bogard urn:uuid:03c5dc80-1fdb-f79e-c2f5-a6c3969ac13d Thu, 09 Nov 2017 13:48:30 +0000 <p>Today I pushed out <a href="https://www.nuget.org/packages/AutoMapper/">AutoMapper 6.2.0</a>. Check out the <a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v6.2.0">release notes</a> for the closed issues.</p> <p>A couple of big features in this release include <a href="http://docs.automapper.org/en/stable/Inline-Mapping.html">inline maps</a>, where AutoMapper no longer requires you to call <code>CreateMap</code> for new maps. I had resisted this idea (and even took out the</p> <p>Today I pushed out <a href="https://www.nuget.org/packages/AutoMapper/">AutoMapper 6.2.0</a>.
Check out the <a href="https://github.com/AutoMapper/AutoMapper/releases/tag/v6.2.0">release notes</a> for the closed issues.</p> <p>A couple of big features in this release include <a href="http://docs.automapper.org/en/stable/Inline-Mapping.html">inline maps</a>, where AutoMapper no longer requires you to call <code>CreateMap</code> for new maps. I had resisted this idea (and even took out the <code>Mapper.DynamicMap</code> feature) because I saw these dynamic maps as a bit dangerous. One of the original design guidelines was configuration validation, to make sure I didn't screw up a map.</p> <p>It's a bit obvious in hindsight, but I can easily support inline maps with validation by simply validating the single map on first map. You can get the safety of mapping validation with the convenience of inline map creation.</p> <p>Enjoy!</p> OnePlus 5 http://blog.jasonmeridth.com/posts/one-plus-5/ Jason Meridth’s Blog urn:uuid:e97f5443-9034-857a-33b5-00f1fe7e6fcc Mon, 03 Jul 2017 17:02:00 +0000 OnePlus 5 <h2 id="oneplus-5">OnePlus 5</h2> <p>I’ve had a Nexus 6 for the last 2 years and was finally due for a phone upgrade. I went through a pretty good fiasco with the Google store trying to purchase a Google Pixel XL earlier this year so I decided to wait for the OnePlus 5 (released late June). I ordered it the first hour it was announced.</p> <p>Features that I’m loving:</p> <ul> <li>8 GB RAM</li> <li>128 GB hard drive</li> <li>dash charge</li> <li>USB C</li> <li>OxygenOS (Android fork)</li> <li>headphone jack</li> <li>fingerprint authentication</li> <li>dual camera (allows for portrait mode) 16 MP</li> <li>front camera 16 MP</li> <li>Dual SIM support</li> </ul> <p>More detailed specs can be found <a href="https://oneplus.net/5/specs">here</a>.</p> <p>Features I’m adjusting to:</p> <ul> <li>no Google Phone app install allowed</li> <li>no Google Contacts app install allowed</li> </ul> <p>I’ll adjust to those over time.</p> <p>I did entertain the iPhone 7 for a bit also but am not a fan of iTunes. The iPhone integration with Google apps has gotten much better since the last time I looked though.</p> <p>Cheers.</p> Hello, React! – A Beginner’s Setup Tutorial https://lostechies.com/derekgreer/2017/05/25/hello-react-a-beginners-setup-tutorial/ Los Techies urn:uuid:896513a4-c41d-c8ea-820b-fbc3e2b5a442 Thu, 25 May 2017 08:00:32 +0000 React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over configuration for getting started.
Tutorials which side-step configuration by using jsfiddle or code generator options are great when you’re wanting to just focus on the framework features themselves, but many tutorials leave beginners struggling to piece things together when you’re ready to create a simple react application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process. <p>React has been around for a few years now and there are quite a few tutorials available. Unfortunately, many are outdated, overly complex, or gloss over configuration for getting started. Tutorials which side-step configuration by using jsfiddle or code generator options are great when you’re wanting to just focus on the framework features themselves, but many tutorials leave beginners struggling to piece things together when you’re ready to create a simple react application from scratch. This tutorial is intended to help beginners get up and going with React by manually walking through a minimal setup process.</p> <h2 id="a-simple-tutorial">A Simple Tutorial</h2> <p>This tutorial is merely intended to help walk you through the steps to getting a simple React example up and running. When you’re ready to dive into actually learning the React framework, a great list of tutorials can be found <a href="http://andrewhfarmer.com/getting-started-tutorials/">here.</a></p> <p>There are several build, transpiler, and bundling tools from which to select when working with React. For this tutorial, we’ll be using Node, NPM, Webpack, and Babel.</p> <h2 id="step-1-install-node">Step 1: Install Node</h2> <p>Download and install Node for your target platform. Node distributions can be obtained <a href="https://nodejs.org/en/">here</a>.</p> <h2 id="step-2-create-a-project-folder">Step 2: Create a Project Folder</h2> <p>From a command line prompt, create a folder where you plan to develop your example.</p> <pre>$&gt; mkdir hello-react </pre> <h2 id="step-3-initialize-project">Step 3: Initialize Project</h2> <p>Change directory into the example folder and use the Node Package Manager (npm) to initialize the project:</p> <pre>$&gt; cd hello-react
$&gt; npm init --yes
</pre> <p>This results in the creation of a package.json file. While not technically necessary for this example, creating this file will allow us to persist our packaging and runtime dependencies.</p> <h2 id="step-4-install-react">Step 4: Install React</h2> <p>React is broken up into a core framework package and a package related to rendering to the Document Object Model (DOM).</p> <p>From the hello-react folder, run the following command to install these packages and add them to your package.json file:</p> <pre>$&gt; npm install --save-dev react react-dom </pre> <h2 id="step-5-install-babel">Step 5: Install Babel</h2> <p>Babel is a transpiler, which is to say it’s a tool for converting one language or language version to another. In our case, we’ll be converting EcmaScript 2015 to EcmaScript 5.</p> <p>From the hello-react folder, run the following command to install babel:</p> <pre>$&gt; npm install --save-dev babel-core </pre> <h2 id="step-6-install-webpack">Step 6: Install Webpack</h2> <p>Webpack is a module bundler.
We’ll be using it to package all of our scripts into a single script we’ll include in our example Web page.</p> <p>From the hello-react folder, run the following command to install webpack globally:</p> <pre>$&gt; npm install webpack --global </pre> <h2 id="step-7-install-babel-loader">Step 7: Install Babel Loader</h2> <p>Babel loader is a Webpack plugin for using Babel to transpile scripts during the bundling process.</p> <p>From the hello-react folder, run the following command to install babel loader:</p> <pre>$&gt; npm install --save-dev babel-loader </pre> <h2 id="step-8-install-babel-presets">Step 8: Install Babel Presets</h2> <p>Babel presets are collections of plugins needed to support a given feature. For example, the latest version of babel-preset-es2015 at the time of this writing will install 24 plugins which enable Babel to transpile ECMAScript 2015 to ECMAScript 5. We’ll be using presets for ES2015 as well as presets for React. The React presets are primarily needed for processing of <a href="https://facebook.github.io/react/docs/introducing-jsx.html">JSX</a>.</p> <p>From the hello-react folder, run the following command to install the babel presets for both ES2015 and React:</p> <pre>$&gt; npm install --save-dev babel-preset-es2015 babel-preset-react </pre> <h2 id="step-9-configure-babel">Step 9: Configure Babel</h2> <p>In order to tell Babel which presets we want to use when transpiling our scripts, we need to provide a babel config file.</p> <p>Within the hello-react folder, create a file named .babelrc with the following contents:</p> <pre>{ "presets" : ["es2015", "react"] } </pre> <h2 id="step-10-configure-webpack">Step 10: Configure Webpack</h2> <p>In order to tell Webpack we want to use Babel, where our entry point module is, and where we want the output bundle to be created, we need to create a Webpack config file.</p> <p>Within the hello-react folder, create a file named webpack.config.js with the following contents:</p> <pre>const path = require('path');

module.exports = {
  entry: './app/index.js',
  output: {
    path: path.resolve('dist'),
    filename: 'index_bundle.js'
  },
  module: {
    rules: [
      { test: /\.js$/, loader: 'babel-loader', exclude: /node_modules/ }
    ]
  }
}
</pre> <h2 id="step-11-create-a-react-component">Step 11: Create a React Component</h2> <p>For our example, we’ll just be creating a simple component which renders the text “Hello, React!”.</p> <p>First, create an app sub-folder:</p> <pre>$&gt; mkdir app </pre> <p>Next, create a file named app/index.js with the following content:</p> <pre>import React from 'react';
import ReactDOM from 'react-dom';

class HelloWorld extends React.Component {
  render() {
    return (
      &lt;div&gt;
        Hello, React!
      &lt;/div&gt;
    )
  }
};

ReactDOM.render(&lt;HelloWorld /&gt;, document.getElementById('root'));
</pre> <p>Briefly, this code includes the react and react-dom modules, defines a HelloWorld class which returns an element containing the text “Hello, React!” expressed using <a href="https://facebook.github.io/react/docs/introducing-jsx.html">JSX syntax</a>, and finally renders an instance of the HelloWorld element (also using JSX syntax) to the DOM.</p> <p>If you’re completely new to React, don’t worry too much about trying to fully understand the code.
Once you’ve completed this tutorial and have an example up and running, you can move on to one of the aforementioned tutorials, or work through <a href="https://facebook.github.io/react/docs/hello-world.html">React’s Hello World example</a> to learn more about the syntax used in this example.</p> <div class="note"> <p> Note: In many examples, you will see the following syntax: </p> <pre>var HelloWorld = React.createClass({
  render() {
    return (
      &lt;div&gt;
        Hello, React!
      &lt;/div&gt;
    )
  }
});
</pre> <p> This syntax is how classes were defined in older versions of React and will therefore be what you see in older tutorials. As of React version 15.5.0, use of this syntax will produce the following warning: </p> <p style="color: red"> Warning: HelloWorld: React.createClass is deprecated and will be removed in version 16. Use plain JavaScript classes instead. If you&#8217;re not yet ready to migrate, create-react-class is available on npm as a drop-in replacement. </p> </div> <h2 id="step-12-create-a-webpage">Step 12: Create a Webpage</h2> <p>Next, we’ll create a simple html file which includes the bundled output defined in step 10 and declare a &lt;div&gt; element with the id “root” which is used by our react source in step 11 to render our HelloWorld component.</p> <p>Within the hello-react folder, create a file named index.html with the following contents:</p> <pre>&lt;html&gt;
  &lt;div id="root"&gt;&lt;/div&gt;
  &lt;script src="./dist/index_bundle.js"&gt;&lt;/script&gt;
&lt;/html&gt;
</pre> <h2 id="step-13-bundle-the-application">Step 13: Bundle the Application</h2> <p>To convert our app/index.js source to ECMAScript 5 and bundle it with the react and react-dom modules we’ve included, we simply need to execute webpack.</p> <p>Within the hello-react folder, run the following command to create the dist/index_bundle.js file referenced by our index.html file:</p> <pre>$&gt; webpack </pre> <h2 id="step-14-run-the-example">Step 14: Run the Example</h2> <p>Using a browser, open up the index.html file. If you’ve followed all the steps correctly, you should see the following text displayed:</p> <pre>Hello, React! </pre> <h2 id="conclusion">Conclusion</h2> <p>Congratulations! After completing this tutorial, you should have a pretty good idea about the steps involved in getting a basic React app up and going. Hopefully this will save some absolute beginners from spending too much time trying to piece these steps together.</p> Up into the Swarm https://lostechies.com/gabrielschenker/2017/04/08/up-into-the-swarm/ Los Techies urn:uuid:844f7b20-25e5-e658-64f4-e4d5f0adf614 Sat, 08 Apr 2017 20:59:26 +0000 Last Thursday evening I had the opportunity to give a presentation at the Docker Meetup in Austin TX about how to containerize a Node JS application and deploy it into a Docker Swarm. I also demonstrated techniques that can be used to reduce friction in the development process when using containers. <p>Last Thursday evening I had the opportunity to give a presentation at the Docker Meetup in Austin TX about how to containerize a Node JS application and deploy it into a Docker Swarm. I also demonstrated techniques that can be used to reduce friction in the development process when using containers.</p> <p>The meeting was recorded but unfortunately sound is only available after approximately 16 minutes.
You might want to just scroll forward to this point.</p> <p>Video: <a href="https://youtu.be/g786WiS5O8A">https://youtu.be/g786WiS5O8A</a></p> <p>Slides and code: <a href="https://github.com/gnschenker/pets-node">https://github.com/gnschenker/pets-node</a></p> New Year, New Blog https://lostechies.com/jimmybogard/2017/01/26/new-year-new-blog/ Los Techies urn:uuid:447d30cd-e297-a888-7ccc-08c46f5a1688 Thu, 26 Jan 2017 03:39:05 +0000 One of my resolutions this year was to take ownership of my digital content, and as such, I’ve launched a new blog at jimmybogard.com. I’m keeping all my existing content on Los Techies, where I’ve been humbled to be a part of for the past almost 10 years. Hundreds of posts, thousands of comments, and innumerable wrong opinions on software and systems, it’s been a great ride. <p>One of my resolutions this year was to take ownership of my digital content, and as such, I’ve launched a new blog at <a href="https://jimmybogard.com/">jimmybogard.com</a>. I’m keeping all my existing content on <a href="https://jimmybogard.lostechies.com/">Los Techies</a>, where I’ve been humbled to be a part of for the past <a href="http://grabbagoft.blogspot.com/2007/11/joining-los-techies.html">almost 10 years</a>. Hundreds of posts, thousands of comments, and innumerable wrong opinions on software and systems, it’s been a great ride.</p> <p>If you’re still subscribed to my FeedBurner feed – nothing to change, you’ll get everything as it should. If you’re only subscribed to the Los Techies feed…well you’ll need to <a href="http://feeds.feedburner.com/GrabBagOfT">subscribe to my feed</a> now.</p> <p>Big thanks to everyone at Los Techies that’s put up with me over the years, especially our site admin <a href="https://jasonmeridth.com/">Jason</a>, who has become far more knowledgeable about WordPress than he ever probably wanted.</p> Containers – Cleanup your house revisited https://lostechies.com/gabrielschenker/2016/12/12/containers-cleanup-your-house-revisited/ Los Techies urn:uuid:696d80e9-3827-df0d-6fdf-ab1f51274e7b Mon, 12 Dec 2016 21:10:02 +0000 In version 1.13 Docker has added some useful commands to the CLI that make it easier to keep our environment clean. As you might have experienced yourself, over time our development environment gets really cluttered with unused containers, dangling Docker images, abandoned volumes and forgotten networks. All these obsolete items take away precious resources and ultimately lead to an unusable environment. In a previous post I have shown how we can keep our house clean by using various commands like <p>In version 1.13 Docker has added some useful commands to the CLI that make it easier to keep our environment clean. As you might have experienced yourself, over time our development environment gets really cluttered with unused containers, dangling Docker images, abandoned volumes and forgotten networks. All these obsolete items take away precious resources and ultimately lead to an unusable environment. In a <a href="https://lostechies.com/gabrielschenker/2016/08/14/containers-clean-up-your-house/">previous post</a> I have shown how we can keep our house clean by using various commands like</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker rm -f $(docker ps -aq) </code></pre></div></div> <p>to forcibly remove all running, stopped and terminated containers.
Similarly we learned commands that allowed us to remove dangling images, networks and volumes.</p> <p>Although the commands I described solved the problem, they were proprietary, verbose or difficult to remember. The new commands introduced are straightforward and easy to use. Let’s try them out.</p> <blockquote> <p>If you like this article then you can find more posts about containers in general and Docker in particular in <a href="https://lostechies.com/gabrielschenker/2016/08/26/containers-an-index/">this</a> table of contents.</p> </blockquote> <h1 id="management-commands">Management Commands</h1> <p>To un-clutter the CLI a bit, Docker 1.13 introduces new management commands. The list of those is</p> <ul> <li>system</li> <li>container</li> <li>image</li> <li>plugin</li> <li>secret</li> </ul> <p>Older versions of Docker already had <code class="highlighter-rouge">network, node, service, swarm</code> and <code class="highlighter-rouge">volume</code>.</p> <p>These new commands group subcommands that were previously directly implemented as root commands. Let me give an example</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker exec -it [container-name] [some-command] </code></pre></div></div> <p>The <code class="highlighter-rouge">exec</code> command is now a subcommand under <code class="highlighter-rouge">container</code>. Thus the equivalent of the above command is</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container exec -it [container-name] [some-command] </code></pre></div></div> <p>I would assume that for reasons of backwards compatibility the old syntax will stick around with us for the time being.</p> <h1 id="docker-system">Docker System</h1> <p>There is a new management command <code class="highlighter-rouge">system</code>. It has 4 possible subcommands <code class="highlighter-rouge">df, events, info</code> and <code class="highlighter-rouge">prune</code>. The command</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker system df </code></pre></div></div> <p>gives us an overview of the overall disk usage of Docker. This includes images, containers and (local) volumes. So we can now at any time stay informed about how many resources Docker consumes.</p> <p>If the previous command shows us that we’re using too much space we might as well start to clean up. Our next command does exactly that. It is a <em>do-it-all</em> type of command</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker system prune </code></pre></div></div> <p>This command removes everything that is currently not used, and it does it in the correct sequence so that a maximum outcome is achieved. First unused containers are removed, then volumes and networks and finally dangling images. We have to confirm the operation though by answering with <code class="highlighter-rouge">y</code>. If we want to use this command in a script we can use the parameter <code class="highlighter-rouge">--force</code> or <code class="highlighter-rouge">-f</code> to instruct Docker not to ask for confirmation.</p> <h1 id="docker-container">Docker Container</h1> <p>We already know many of the subcommands of <code class="highlighter-rouge">docker container</code>. They were previously (and still are) direct subcommands of <code class="highlighter-rouge">docker</code>.
We can get the full list of subcommands like this</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container --help </code></pre></div></div> <p>In the list we find again a <code class="highlighter-rouge">prune</code> command. If we use it we only remove unused containers. Thus the command is much more limited than the <code class="highlighter-rouge">docker system prune</code> command that we introduced in the previous section. Using the <code class="highlighter-rouge">--force</code> or <code class="highlighter-rouge">-f</code> flag we can again instruct the CLI not to ask us for confirmation</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker container prune --force </code></pre></div></div> <h1 id="docker-network">Docker Network</h1> <p>As you might expect, we now also have a <code class="highlighter-rouge">prune</code> command here.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker network prune </code></pre></div></div> <p>removes all orphaned networks</p> <h1 id="docker-volume">Docker Volume</h1> <p>And once again we find a new <code class="highlighter-rouge">prune</code> command for volumes too.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker volume prune </code></pre></div></div> <p>removes all (local) volumes that are not used by at least one container.</p> <h1 id="docker-image">Docker Image</h1> <p>Finally we have the new image command which of course gets a <code class="highlighter-rouge">prune</code> subcommand too. We have the flag <code class="highlighter-rouge">--force</code> that does the same job as in the other samples and we have a flag <code class="highlighter-rouge">--all</code> that not just removes dangling images but all unused ones. Thus</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker image prune --force --all </code></pre></div></div> <p>removes all images that are unused and does not ask us for confirmation.</p> <h1 id="summary">Summary</h1> <p>Not only has Docker v 1.13 brought some needed order into the zoo of Docker commands by introducing so called admin commands but also we find some very helpful commands to clean up our environment from orphaned items. My favorite command will most probably be the <code class="highlighter-rouge">docker system prune</code> as I always like an uncluttered environment.</p> Dealing with Duplication in MediatR Handlers https://lostechies.com/jimmybogard/2016/12/12/dealing-with-duplication-in-mediatr-handlers/ Los Techies urn:uuid:753c0ce6-0420-943b-c09b-26c660ba2565 Mon, 12 Dec 2016 20:37:57 +0000 We’ve been using MediatR (or some manifestation of it) for a number of years now, and one issue that comes up frequently is “how do I deal with duplication”. In a traditional DDD n-tier architecture, you had: <p>We’ve been using MediatR (or some manifestation of it) for a number of years now, and one issue that comes up frequently is “how do I deal with duplication”. In a traditional DDD n-tier architecture, you had:</p> <ul> <li>Controller</li> <li>Service</li> <li>Repository</li> <li>Domain</li> </ul> <p>It was rather easy to share logic in a service class for business logic, or a repository for data logic (queries, etc.) When it comes to building apps using CQRS and MediatR, we remove these layer types (Service and Repository) in favor of request/response pairs that line up 1-to-1 with distinct external requests. 
It’s a variation of the <a href="http://alistair.cockburn.us/Hexagonal+architecture">Ports and Adapters</a> pattern from Hexagonal Architecture.</p> <p>Recently, going through an exercise with a client where we collapsed a large project structure and replaced the layers with commands, queries, and MediatR handlers brought this issue to the forefront. Our approaches for tackling this duplication will highly depend on what the handler is actually doing. As we saw in the previous post on <a href="https://lostechies.com/jimmybogard/2016/10/27/cqrsmediatr-implementation-patterns/">CQRS/MediatR implementation patterns</a>, our handlers can do whatever we like. Stored procedures, event sourcing, anything. Typically my handlers fall in the “procedural C# code” category. I have domain entities, but my handler is just dumb procedural logic.</p> <h3 id="starting-simple">Starting simple</h3> <p>Regardless of my refactoring approach, I ALWAYS start with the simplest handler that could possibly work. This is the “green” step in TDD’s “Red Green Refactor” cycle. Create a handler test, get the test to pass in the simplest means possible. This means the pattern I choose is a <a href="http://martinfowler.com/eaaCatalog/transactionScript.html">Transaction Script</a>. Procedural code, the simplest thing possible.</p> <p>Once I have my handler written and my test passes, then the real fun begins, the Refactor step!</p> <p><strong>WARNING: Do not skip the refactoring step</strong></p> <p>At this point, I start with just my handler and the code smells it exhibits. <a href="https://martinfowler.com/bliki/CodeSmell.html">Code smells</a>, as a reminder, are an indication that the code COULD exhibit a problem and MIGHT need refactoring, but it’s worth a deliberate decision whether to refactor (or not). Typically, I won’t hit duplication code smells at this point; it’ll be just standard code smells like:</p> <ul> <li>Large Class</li> <li>Long Method</li> </ul> <p>Those are pretty straightforward refactorings; you can use:</p> <ul> <li>Extract Class</li> <li>Extract Subclass</li> <li>Extract Interface</li> <li>Extract Method</li> <li>Replace Method with Method Object</li> <li>Compose Method</li> </ul> <p>I generally start with these to make my handler make more sense, be easier to understand, and the like. Past that, I start looking at more behavioral smells:</p> <ul> <li>Combinatorial Explosion</li> <li>Conditional Complexity</li> <li>Feature Envy</li> <li>Inappropriate Intimacy</li> <li>and finally, Duplicated Code</li> </ul> <p>Because I’m freed of any sort of layer objects, I can choose whatever refactoring makes the most sense.</p> <h3 id="dealing-with-duplication">Dealing with Duplication</h3> <p>If I’m in a DDD state of mind, my refactorings in my handlers tend to be as I would have done for years, as I laid out in my (still relevant) blog post on <a href="https://lostechies.com/jimmybogard/2010/02/04/strengthening-your-domain-a-primer/">strengthening your domain</a>. But that doesn’t really address duplication.</p> <p>In my handlers, duplication tends to come in a couple of flavors:</p> <ul> <li>Behavioral duplication</li> <li>Data access duplication</li> </ul> <p>Basically, the code duplicated either accesses a DbContext or other ORM thing, or it doesn’t. One approach I’ve seen for either duplication is to have common query/command handlers, so that my handler calls MediatR or some other handler.</p> <p>I’m not a fan of this approach – it gets quite confusing.
Instead, I want MediatR to serve as the outermost window into the actual domain-specific behavior in my application:</p> <p><a href="https://lostechies.com/jimmybogard/files/2016/12/image.png"><img class="alignnone size-full wp-image-1255" src="https://lostechies.com/jimmybogard/files/2016/12/image.png" alt="" width="324" height="237" /></a></p> <p>Excluding sub-handlers or delegating handlers, where should my logic go? Several options are now available to me:</p> <ul> <li>Its own class (named appropriately)</li> <li>Domain service (as was its original purpose in the DDD book)</li> <li>Base handler class</li> <li>Extension method</li> <li>Method on my DbContext</li> <li>Method on my aggregate root/entity</li> </ul> <p>As to which one is most appropriate, it naturally depends on what the duplicated code is actually doing. Common query? Method on the DbContext or an extension method to IQueryable or DbSet. Domain behavior? Method on your domain model or perhaps a domain service. There are a lot of options here; it really just depends on what’s duplicated and where those duplications lie. If the duplication is within a feature folder, a base handler class for that feature folder would be a good idea.</p> <p>In the end, I don’t really prefer any one approach over another. There are tradeoffs with any approach, and I try as much as possible to let the nature of the duplication guide me to the correct solution.</p> Docker and Swarmkit – Part 6 – New Features of v1.13 https://lostechies.com/gabrielschenker/2016/11/25/docker-and-swarmkit-part-6-new-features-of-v1-13/ Los Techies urn:uuid:3667b5ea-bb17-4785-9d61-7113b454f86e Fri, 25 Nov 2016 15:14:15 +0000 In a few days version 1.13 of Docker will be released and among other things it contains a lot of new features for the Docker Swarmkit. In this post I want to explore some of these new capabilities. <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/docker-swarm-logo.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/docker-swarm-logo.png" alt="" title="docker-swarm-logo" width="250" height="342" class="alignleft size-full wp-image-2090" /></a> In a few days version 1.13 of Docker will be released and among other things it contains a lot of new features for the Docker Swarmkit. In this post I want to explore some of these new capabilities.</p> <p>In the last few parts of this series of posts about the Docker Swarmkit we have used version 1.12.x of Docker. You can find those posts here</p> <p><a href="https://lostechies.com/gabrielschenker/2016/09/05/docker-and-swarm-mode-part-1/">Part 1</a>, <a href="https://lostechies.com/gabrielschenker/2016/09/11/docker-and-swarm-mode-part-2/">Part 2</a>, <a href="https://lostechies.com/gabrielschenker/2016/10/05/docker-and-swarm-mode-part-3/">Part 3</a>, <a href="https://lostechies.com/gabrielschenker/2016/10/22/docker-and-swarmkit-part-4/">Part 4</a> and <a href="https://lostechies.com/gabrielschenker/2016/11/11/docker-and-swarmkit-part-5-going-deep/">Part 5</a></p> <p>For a full index of all Docker related posts please refer to <a href="https://lostechies.com/gabrielschenker/2016/08/26/containers-an-index/">this post</a></p> <h1 id="preparating-for-version-113">Preparing for Version 1.13</h1> <p>First we need to prepare our system to use Docker 1.13. I will be using VirtualBox and the Boot2Docker ISO to demonstrate the new features. This is what I have done to get going.
Note that at the time of this writing Docker just <a href="https://github.com/docker/docker/releases">released Docker v1.13 rc2</a>.</p> <p>First I am going to install the newest version of <code class="highlighter-rouge">docker-machine</code> on my Mac. The binaries can be downloaded from <a href="https://github.com/docker/machine/releases">here</a>. In my case the package I download is <a href="https://github.com/docker/machine/releases/download/v0.9.0-rc1/docker-machine-Darwin-x86_64">docker-machine-Darwin-x86_64 v0.9.0-rc1</a>.</p> <p>From the download folder move the binary to the target folder</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mv ~/Downloads/docker-machine-Darwin-x86_64 /usr/local/bin/docker-machine </code></pre></div></div> <p>and then make it executable</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>chmod +x /usr/local/bin/docker-machine </code></pre></div></div> <p>finally we can double check that we have the expected version</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine -v </code></pre></div></div> <p>and in my case I see this</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine version 0.9.0-rc1, build ed849a7 </code></pre></div></div> <p>Now let’s download the newest <code class="highlighter-rouge">boot2docker.iso</code> image. At the time of this writing it is v1.13rc2. We can get it from <a href="https://github.com/boot2docker/boot2docker/releases">here</a>. Once downloaded move the image to the correct location</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mv ~/Downloads/boot2docker.iso ~/.docker/machine/cache/ </code></pre></div></div> <p>And we’re ready to go…</p> <h1 id="creating-a-docker-swarm">Creating a Docker Swarm</h1> <h2 id="preparing-the-nodes">Preparing the Nodes</h2> <p>Now we can create a new swarm with Docker at version 1.13. We use the very same approach as described in <a href="">part x</a> of this series. Please read that post for more details.</p> <p>Let’s clean up any pre-existing nodes called node1, node2, …, nodeX with e.g.
the following command</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>for n in $(seq 1 5); do docker-machine rm node$n; done; </code></pre></div></div> <p>and then we create 5 new nodes with Docker version 1.13rc2</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>for n in $(seq 1 5); do docker-machine create --driver virtualbox node$n; done; </code></pre></div></div> <p>Once this is done (takes about 2 minutes or so) we can double check the result</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ls </code></pre></div></div> <p>which in my case shows this</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/list-of-nodes.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/list-of-nodes.png" alt="" title="list-of-nodes" width="795" height="181" class="alignnone size-full wp-image-2065" /></a></p> <p>Now we can SSH into <code class="highlighter-rouge">node1</code></p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ssh node1 </code></pre></div></div> <p>and we should see this</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/boot2docker.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/boot2docker.png" alt="" title="boot2docker" width="687" height="408" class="alignnone size-full wp-image-2063" /></a></p> <p>and indeed, we now have a Docker host running at version 1.13.0-rc2.</p> <h2 id="creating-the-swarm">Creating the Swarm</h2> <p>Now let’s first initialize a swarm. <code class="highlighter-rouge">node1</code> will be the leader and <code class="highlighter-rouge">node2</code> and <code class="highlighter-rouge">node3</code> will be additional master nodes whilst <code class="highlighter-rouge">node4</code> and <code class="highlighter-rouge">node5</code> will be worker nodes (<em>Make sure you are in a terminal on your Mac</em>).</p> <p>First let’s get the IP address of the future swarm leader</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export leader_ip=`docker-machine ip node1` </code></pre></div></div> <p>Then we can initialize the swarm</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ssh node1 docker swarm init --advertise-addr $leader_ip </code></pre></div></div> <p>Now let’s get the swarm join token for a worker node</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export token=`docker-machine ssh node1 docker swarm join-token worker -q` </code></pre></div></div> <p>We can now use this token to have the other 4 nodes join as worker nodes</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>for n in $(seq 2 5); do docker-machine ssh node$n docker swarm join --token $token $leader_ip:2377; done; </code></pre></div></div> <p>what we should see is this</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/workers-joining-swarm.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/workers-joining-swarm.png" alt="" title="workers-joining-swarm" width="676" height="174" class="alignnone size-full wp-image-2067" /></a></p> <p>Let’s promote nodes 2 and 3 to masters</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ssh node1 docker node promote node2 node3 </code></pre></div></div> <p>And to make sure
everything is as expected we can list all nodes on the leader</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ssh node1 docker node ls </code></pre></div></div> <p>In my case I see this</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/swarm-ready.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/swarm-ready.png" alt="" title="swarm-ready" width="617" height="168" class="alignnone size-full wp-image-2069" /></a></p> <h2 id="adding-everything-to-one-script">Adding everything to one script</h2> <p>We can now aggregate all snippets into one single script which makes it really easy in the future to create a swarm from scratch</p> <script src="https://gist.github.com/daf505d896b135ccfbbf7507ef653553.js"> </script> <h1 id="analyzing-the-new-features">Analyzing the new Features</h1> <h2 id="secrets">Secrets</h2> <p>Probably one of the most requested features is support for secrets managed by the swarm. Docker supports a new command <code class="highlighter-rouge">secret</code> for this. We can create, remove, inspect and list secrets in the swarm. Let’s try to create a new secret</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>echo '1admin2' | docker secret create 'MYSQL_PASSWORD' </code></pre></div></div> <p>The value/content of a secret is provided via <code class="highlighter-rouge">stdin</code>. In this case we pipe it into the command.</p> <p>When we run a service we can map secrets into the container using the <code class="highlighter-rouge">--secret</code> flag. Each secret is mapped as a file into the container at <code class="highlighter-rouge">/run/secrets</code>. Thus, if we run a service like this</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service create --name mysql --secret MYSQL_PASSWORD mysql:latest ls /run/secrets </code></pre></div></div> <p>and then observe the logs of the service (see below for details on how to use logs)</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service logs mysql </code></pre></div></div> <p>we should see this</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/mounted-secrets.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/mounted-secrets.png" alt="" title="mounted-secrets" width="400" height="66" class="alignnone size-full wp-image-2082" /></a></p> <p>The content of each file corresponds to the value of the secret.</p> <h2 id="publish-a-port">Publish a Port</h2> <p>When creating a new service and wanting to publish a port, instead of only using the somewhat condensed <code class="highlighter-rouge">--publish</code> flag we can now use the new <code class="highlighter-rouge">--port</code> flag which uses a more descriptive syntax (also called ‘csv’ syntax)</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service create --name nginx --port mode=ingress,target=80,published=8080,protocol=tcp nginx </code></pre></div></div> <p>In my opinion, although the syntax is more verbose it makes things less confusing. Often people with the old syntax forgot in which order the target and the published port have to be declared.
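<p>For comparison, the same service declared with the older condensed flag would look something like this (shown only for illustration, not as an additional command to run; with <code class="highlighter-rouge">--publish</code> the ports are given as published:target)</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service create --name nginx --publish 8080:80 nginx </code></pre></div></div>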
Now it is evident without having to consult the documentation each time.</p> <h2 id="attachable-network-support">Attachable Network support</h2> <p>Previously it was not possible for containers that were run in classical mode (via <code class="highlighter-rouge">docker run ...</code>) to run on the same network as a service. With version 1.13 Docker has introduced the flag <code class="highlighter-rouge">--attachable</code> to the <code class="highlighter-rouge">network create</code> command. This will allow us to run services and individual containers on the same network. Let’s try that and create such a network called <code class="highlighter-rouge">web</code></p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker network create --attachable --driver overlay web </code></pre></div></div> <p>and let’s run Nginx as a service on this network</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service create --name nginx --network web nginx:latest </code></pre></div></div> <p>and then we run a conventional container on this network that tries to access the Nginx service. First we run it without attaching it to the <code class="highlighter-rouge">web</code> network</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run --rm -it appropriate/curl nginx </code></pre></div></div> <p>and the result is as expected, a failure</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/attachable-network-fail1.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/attachable-network-fail1.png" alt="" title="attachable-network-fail" width="653" height="201" class="alignnone size-full wp-image-2077" /></a></p> <p>And now let’s try the same again but this time we attach the container to the <code class="highlighter-rouge">web</code> network</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run --rm -it --network web appropriate/curl nginx </code></pre></div></div> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/attachable-network-success.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/attachable-network-success.png" alt="" title="attachable-network-success" width="625" height="529" class="alignnone size-full wp-image-2080" /></a></p> <h2 id="run-docker-deamon-in-experimental-mode">Run Docker Daemon in experimental mode</h2> <p>In version 1.13 the experimental features are now part of the standard binaries and can be enabled by running the Daemon with the <code class="highlighter-rouge">--experimental</code> flag. Let’s do just this. First we need to change the <code class="highlighter-rouge">dockerd</code> profile and add the flag</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ssh node1 -t sudo vi /var/lib/boot2docker/profile </code></pre></div></div> <p>add the <code class="highlighter-rouge">--experimental</code> flag to the <code class="highlighter-rouge">EXTRA_ARGS</code> variable.
In my case the file looks like this after the modification</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>EXTRA_ARGS=' --label provider=virtualbox --experimental ' CACERT=/var/lib/boot2docker/ca.pem DOCKER_HOST='-H tcp://0.0.0.0:2376' DOCKER_STORAGE=aufs DOCKER_TLS=auto SERVERKEY=/var/lib/boot2docker/server-key.pem SERVERCERT=/var/lib/boot2docker/server.pem </code></pre></div></div> <p>Save the changes and reboot the leader node</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine stop node1; docker-machine start node1 </code></pre></div></div> <p>After the node is ready SSH into it</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ssh node1 </code></pre></div></div> <h2 id="aggregated-logs-of-a-service-experimental">Aggregated logs of a service (experimental!)</h2> <p>In this release we can now easily get the aggregated logs of all tasks of a given service in a swarm. That is neat. Let’s quickly try that. First we need to run Docker in experimental mode on the node where we execute all commands. Just follow the steps in the previous section.</p> <p>Now let’s create a sample service and run 3 instances (tasks) of it. We will be using Redis in this particular case, but any other service should work.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service create --name redis --replicas 3 redis:latest </code></pre></div></div> <p>after giving the service some time to initialize and run the tasks we can now output the aggregated log</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker service logs redis </code></pre></div></div> <p>and we should see something like this (I am just showing the first few lines)</p> <script src="https://gist.github.com/2eeeddd78c14a92247ee96fbcbbf10ea.js"> </script> <p>We can clearly see how the output is aggregated from the 3 tasks running on nodes 3, 4 and 5. This is a huge improvement IMHO and I can’t wait until it is part of the stable release.</p> <h1 id="summary">Summary</h1> <p>In this post we have created a Docker swarm on VirtualBox using the new version 1.13.0-rc2 of Docker. This new release offers many new and exciting features. In this post I have concentrated on some of the features concerning the Swarmkit. My post is getting too long and I have still so many interesting new features to explore. I will do that in my next post. Stay tuned.</p> Docker and SwarmKit – Part 5 – going deep https://lostechies.com/gabrielschenker/2016/11/11/docker-and-swarmkit-part-5-going-deep/ Los Techies urn:uuid:fb65b412-f15d-8c04-2d58-6cee571df584 Fri, 11 Nov 2016 11:52:39 +0000 In this post we will work with the SwarmKit directly and not use the Docker CLI to access it. For that we have to first build the necessary components from source which we can find on GitHub. <p>In this post we will work with the SwarmKit directly and not use the Docker CLI to access it. For that we have to first build the necessary components from source which we can find on <a href="https://github.com/docker/swarmkit">GitHub</a>.</p> <p>You can find the links to the previous 4 parts of this series <a href="https://lostechies.com/gabrielschenker/2016/08/26/containers-an-index/">here</a>.
There you will also find links to my other container related posts.</p> <h1 id="build-the-infrastructure">Build the infrastructure</h1> <p>Once again we will use VirtualBox to create a few virtual machines that will be the members of our cluster. First make sure that you have no existing VM called <code class="highlighter-rouge">nodeX</code> where X is a number between 1 and 5. Otherwise use <code class="highlighter-rouge">docker-machine rm nodeX</code> to remove the corresponding nodes. Once we’re ready to go let’s build 5 VMs with this command</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>for n in $(seq 1 5); do docker-machine create --driver virtualbox node$n; done; </code></pre></div></div> <p>As always, building the infrastructure is the most time consuming task by far. On my laptop the above command takes a couple of minutes. The equivalent on say AWS or Azure would also take a few minutes.</p> <blockquote> <p>Luckily we don’t have to do that very often. On the other hand, what I just said sounds a bit silly if you’re an oldie like me. I still remember the days when we had to wait weeks to get a new VM or even worse months to get a new physical server. So, we are totally spoiled. (Rant)</p> </blockquote> <p>Once the VMs are built use</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ls </code></pre></div></div> <p>to verify that all machines are up and running as expected</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/list-of-vms.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/list-of-vms.png" alt="" title="list-of-vms" width="766" height="154" class="alignnone size-full wp-image-2030" /></a></p> <h1 id="build-swarmkit-binaries">Build SwarmKit Binaries</h1> <p>To build the binaries of the SwarmKit we can either use an existing Go environment on our laptop and follow the instructions <a href="https://github.com/docker/swarmkit/blob/master/BUILDING.md">here</a> or use the <a href="https://hub.docker.com/_/golang/">golang</a> Docker container to build the binaries inside a container without the need to have Go natively installed</p> <p>We can SSH into <code class="highlighter-rouge">node1</code> which later should become the leader of the swarm.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ssh node1 </code></pre></div></div> <p>On our leader we first create a new directory, e.g.</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir swarmkit </code></pre></div></div> <p>now cd into the <code class="highlighter-rouge">swarmkit</code> folder</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cd swarmkit </code></pre></div></div> <p>we then clone the source from GitHub using Go</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run --rm -t -v $(pwd):/go golang:1.7 go get -d github.com/docker/swarmkit </code></pre></div></div> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/cloning-source.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/cloning-source.png" alt="" title="cloning-source" width="882" height="226" class="alignnone size-full wp-image-2032" /></a></p> <p>this will put the source under the directory <code class="highlighter-rouge">/go/src/github.com/docker/swarmkit</code> inside the container, which thanks to the volume mount corresponds to <code class="highlighter-rouge">./src/github.com/docker/swarmkit</code> in our swarmkit folder on the host.
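<p>As a quick sanity check (this is not part of the original steps, just a way to convince ourselves the clone worked) we can list the cloned repository from within the swarmkit folder on the host</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ls src/github.com/docker/swarmkit </code></pre></div></div>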
Finally we can build the binaries, again using the Go container</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker run --rm -t -v $(pwd):/go -w /go/src/github.com/docker/swarmkit golang:1.7 bash -c "make binaries" </code></pre></div></div> <p>We should see something like this</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/build-source.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/build-source.png" alt="" title="build-source" width="376" height="202" class="alignnone size-full wp-image-2034" /></a></p> <p>and voila, you should find the binaries in the subfolder <code class="highlighter-rouge">bin</code> of the swarmkit folder.</p> <h1 id="using-the-swarmctl-utility">Using the SwarmCtl Utility</h1> <p>To make <code class="highlighter-rouge">swarmd</code> and <code class="highlighter-rouge">swarmctl</code> available everywhere we can create symlinks to these two binaries in the <code class="highlighter-rouge">/usr/bin</code> folder</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo ln -s ~/swarmkit/src/github.com/docker/swarmkit/bin/swarmd /usr/bin/swarmd; sudo ln -s ~/swarmkit/src/github.com/docker/swarmkit/bin/swarmctl /usr/bin/swarmctl </code></pre></div></div> <p>now we can test the tool by entering</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmctl version </code></pre></div></div> <p>and we should see something along the lines of</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmctl github.com/docker/swarmkit v1.12.0-714-gefd44df </code></pre></div></div> <h1 id="create-a-swarm">Create a Swarm</h1> <h2 id="initializing-the-swarm">Initializing the Swarm</h2> <p>Similar to what we were doing in <a href="https://lostechies.com/gabrielschenker/2016/09/05/docker-and-swarm-mode-part-1/">part 1</a> we need to first initialize a swarm.
Still logged in to <code class="highlighter-rouge">node1</code> we can execute this command to do so</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmd -d /tmp/node1 --listen-control-api /tmp/node1/swarm.sock --hostname node1 </code></pre></div></div> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/init-swarm.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/init-swarm.png" alt="" title="init-swarm" width="932" height="256" class="alignnone size-full wp-image-2038" /></a></p> <p>Let’s open a new <code class="highlighter-rouge">ssh</code> session to <code class="highlighter-rouge">node1</code> and assign the socket of the swarm to the environment variable <code class="highlighter-rouge">SWARM_SOCKET</code></p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export SWARM_SOCKET=/tmp/node1/swarm.sock </code></pre></div></div> <p>Now we can use <code class="highlighter-rouge">swarmctl</code> to inspect the swarm</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmctl cluster inspect default </code></pre></div></div> <p>and we should see something along the lines of</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/inspect-swarm.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/inspect-swarm.png" alt="" title="inspect-swarm" width="807" height="237" class="alignnone size-full wp-image-2040" /></a></p> <p>Please note the two swarm tokens that we see at the end of the output above. We will be using those tokens to join the other VMs (we call them nodes) to the swarm either as master or as worker nodes. We have a token for each role.</p> <h2 id="copy-swarmkit-binaries">Copy Swarmkit Binaries</h2> <p>To copy the swarm binaries (swarmctl and swarmd) to all the other nodes we can use this command</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> for n in $(seq 2 5); do docker-machine scp node1:swarmkit/src/github.com/docker/swarmkit/bin/swarmd node$n:/home/docker/; docker-machine scp node1:swarmkit/src/github.com/docker/swarmkit/bin/swarmctl node$n:/home/docker/; done; </code></pre></div></div> <h2 id="joining-worker-nodes">Joining Worker Nodes</h2> <p>Now let’s <code class="highlighter-rouge">ssh</code> into e.g. node2 and join it to the cluster as a worker node</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./swarmd -d /tmp/node2 --hostname node2 --join-addr 192.168.99.100:4242 --join-token &lt;Worker Token&gt; </code></pre></div></div> <p>In my case the <code class="highlighter-rouge">&lt;Worker Token&gt;</code> is <code class="highlighter-rouge">SWMTKN-1-4jz8msqzu2nwz7c0gtmw7xvfl80wmg2gfei3bzpzg7edlljeh3-285metdzg17jztsflhg0umde8</code>. The <code class="highlighter-rouge">join-addr</code> is the IP address of <code class="highlighter-rouge">node1</code> of your setup. You can get it via</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>docker-machine ip node1 </code></pre></div></div> <p>in my case it is <code class="highlighter-rouge">192.168.99.100</code>.</p> <p>Repeat the same for <code class="highlighter-rouge">node3</code>.
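<p>For <code class="highlighter-rouge">node3</code> the join command would look something like this (a sketch derived from the node2 command above; the join token and join address stay the same, only the data directory and hostname change)</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./swarmd -d /tmp/node3 --hostname node3 --join-addr 192.168.99.100:4242 --join-token &lt;Worker Token&gt; </code></pre></div></div>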
As shown above, make sure to replace <code class="highlighter-rouge">node2</code> with <code class="highlighter-rouge">node3</code> in the join command.</p> <p>On <code class="highlighter-rouge">node1</code> we can now execute the command</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmctl node ls </code></pre></div></div> <p>and should see something like this</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/list-swarm-nodes.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/list-swarm-nodes.png" alt="" title="list-swarm-nodes" width="671" height="146" class="alignnone size-full wp-image-2045" /></a></p> <p>As you can see, we now have a cluster of 3 nodes with one master (node1) and two workers (node2 and node3). Please join the remaining two nodes 4 and 5 with the same approach as above.</p> <h2 id="creating-services">Creating Services</h2> <p>Now that we have a swarm, we can create services and update them using the <code class="highlighter-rouge">swarmctl</code> binary. Let’s create a service using the <code class="highlighter-rouge">nginx</code> image</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmctl service create --name nginx --image nginx:latest </code></pre></div></div> <p>This will create the service and run one container instance on a node of our cluster. We can use</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmctl service ls </code></pre></div></div> <p>to list all the services that are defined for this cluster. We should see something like this</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/list-services.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/list-services.png" alt="" title="list-services" width="459" height="86" class="alignnone size-full wp-image-2047" /></a></p> <p>If we want to see more specific information about a particular service we can use the <code class="highlighter-rouge">inspect</code> command</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmctl service inspect nginx </code></pre></div></div> <p>and should get a much more detailed output.</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/inspect-service.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/inspect-service.png" alt="" title="inspect-service" width="916" height="233" class="alignnone size-full wp-image-2049" /></a></p> <p>We can see a lot of details in the above output. I want to specifically point out the column <code class="highlighter-rouge">Node</code> which tells us on which node the nginx container is running.
In my case it is <code class="highlighter-rouge">node2</code>.</p> <p>Now if we want to scale this service we can use the <code class="highlighter-rouge">update</code> command</p> <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>swarmctl service update nginx --replicas 2 </code></pre></div></div> <p>after a short moment (needed to download the image on the remaining node) we should see this when executing the <code class="highlighter-rouge">inspect</code> command again</p> <p><a href="https://lostechies.com/gabrielschenker/files/2016/11/scale-service.png"><img src="https://lostechies.com/gabrielschenker/files/2016/11/scale-service.png" alt="" title="scale-service" width="901" height="303" class="alignnone size-full wp-image-2052" /></a></p> <p>As expected nginx is now running on two nodes of our cluster.</p> <h1 id="summary">Summary</h1> <p>In this part we have used the Docker SwarmKit directly to create a swarm and define and run services on this cluster. In the previous posts of this series we have used the Docker CLI to execute the same tasks. But under the hood the Docker engine builds on the very same SwarmKit functionality that the <code class="highlighter-rouge">swarmd</code> and <code class="highlighter-rouge">swarmctl</code> binaries expose.</p> <p>If you are interested in more articles about containers in general and Docker specifically please refer to <a href="https://lostechies.com/gabrielschenker/2016/08/26/containers-an-index/">this index post</a></p> Mapping Caps Lock to Esc Is Native to OSX Now http://blog.jasonmeridth.com/posts/mapping-caps-lock-to-esc-on-osx-is-native-now/ Jason Meridth’s Blog urn:uuid:3e681608-4600-4b1b-bb86-a1af3c06c95d Mon, 31 Oct 2016 20:12:00 +0000 I have been using Seil for a few years now on OSX to map Caps Lock to Esc. I use Vim for my development and letting my left pinky tap the Caps Lock key instead of Esc allows me to keep my hands on the home row and move much quicker. I also can’t remember the last time I actually needed the Caps Lock key. Well as of 10.12.1 (macOS Sierra Update) you can do this mapping in System Preferences. <p>I have been using <a href="https://pqrs.org/osx/karabiner/seil.html.en">Seil</a> for a few years now on OSX to map Caps Lock to Esc. I use Vim for my development and letting my left pinky tap the Caps Lock key instead of Esc allows me to keep my hands on the home row and move much quicker. I also can’t remember the last time I actually needed the Caps Lock key.
Well as of 10.12.1 (macOS Sierra Update) you can do this mapping in System Preferences.</p> <p>Thank you to my co-worker <a href="https://twitter.com/kweerious">Dedi</a> for letting me know about this.</p> <p>Go to System Preferences from the Apple menu:</p> <p><img src="http://blog.jasonmeridth.com/assets/images/system_preferences.png" alt="system preferences" /></p> <p>Go to Keyboard: <img src="http://blog.jasonmeridth.com/assets/images/keyboard.png" alt="Keyboard" /></p> <p>Go to the “Modifier Keys” button on the bottom right: <img src="http://blog.jasonmeridth.com/assets/images/modifier_keys.png" alt="Modifier Keys Button" /></p> <p>Change Caps Lock Key to Escape: <img src="http://blog.jasonmeridth.com/assets/images/caps_lock_to_esc.png" alt="Change Caps Lock Key to Esc" /></p> Details HTML Section In Github Issues and Gists http://blog.jasonmeridth.com/posts/details-html-section-in-github-issues-and-gists/ Jason Meridth’s Blog urn:uuid:9ea27af8-53d3-c1e8-700b-0c5ea000d051 Thu, 28 Jul 2016 16:01:00 +0000 I recently became aware of using the &lt;details&gt;&lt;/details&gt; and &lt;summary&gt;...&lt;/summary&gt; tags in Github issues and Gists. <p>I recently became aware of using the <code class="highlighter-rouge">&lt;details&gt;&lt;/details&gt;</code> and <code class="highlighter-rouge">&lt;summary&gt;...&lt;/summary&gt;</code> tags in Github issues and Gists.</p> <p><a href="https://github.com/jmeridth/jmeridth.github.io/issues/3">Here</a> is an example.</p> <p>I will definitely be using this more when posting big logs or stack traces.</p> <p>Kudos:</p> <ul> <li><a href="https://twitter.com/rhein_wein">Laura Frank</a> for showing me this feature</li> <li><a href="https://twitter.com/dahlbyk">Keith Dahlby</a> for letting me know it doesn’t currently work in Firefox</li> <li><a href="https://twitter.com/mhinze">Matt Hinze</a> for showing me <a href="https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d">this</a> gist where I learned of <code class="highlighter-rouge">&lt;summary&gt;...&lt;/summary&gt;</code></li> <li><a href="https://twitter.com/ericclemmons">Eric Clemmons</a> for the awesome <a href="https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d">gist</a> that does a great job explaining the feature</li> </ul> Stop and Remove All Docker Containers http://blog.jasonmeridth.com/posts/stop-remove-all-docker-containers/ Jason Meridth’s Blog urn:uuid:1130cd20-30b8-8352-2a1a-f77bb4a85fb0 Wed, 06 Jul 2016 21:44:00 +0000 Command to stop and remove all docker containers: <p>Command to stop and remove all docker containers:</p> <p><code class="highlighter-rouge">docker stop $(docker ps -a -q) &amp;&amp; docker rm $(docker ps -a -q)</code></p> <p><code class="highlighter-rouge">docker ps -a -q</code> lists all container IDs.</p>
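<p>A shorter variant (an alternative one-liner, not from the original post) forces removal of running containers in a single step, so the separate stop is not needed:</p> <p><code class="highlighter-rouge">docker rm -f $(docker ps -a -q)</code></p> <p>Note that <code class="highlighter-rouge">-f</code> kills running containers before removing them, so prefer the stop-then-remove version when containers should be allowed to shut down cleanly.</p>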