Recently I had an opportunity to implement a Proof of Concept that gives our experiences some degree of resilience against transient errors. Our upstream APIs are built on Microsoft Web API and are pretty reliable; the downstream legacy mainframe systems we rely on, however, are extremely unstable, throwing transient errors at all hours of the day, any day of the week. Some go into maintenance mode every night for up to two hours!
Surely we are not the only ones plagued by legacy systems; someone must have solved this already. All I need to do is google it, copy and paste, done. To my surprise, everywhere I looked, everyone was offering a solution that is too naive, almost borderline incorrect, to be used in production. …
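To make the critique concrete, here is a sketch (in JavaScript, names my own) of the kind of retry loop those copy-and-paste answers tend to suggest, and why it is naive: it retries immediately, a fixed number of times, and treats every error as if it were transient.

```javascript
// The kind of retry most answers suggest. Naive because it has:
// - no backoff or jitter (it hammers the failing system immediately),
// - no distinction between transient faults and permanent ones,
// - a fixed attempt count regardless of the operation.
async function naiveRetry(fn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i += 1) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // swallow and immediately try again
    }
  }
  throw lastError; // give up after the final attempt
}
```

Good enough for a tutorial; not good enough for a mainframe that is down for two hours every night.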
In the last episode we started off this idea of a Rule Engine as a way to separate policy from operation. We implemented the Rule Engine using classes and inheritance. Classes offer a great deal of encapsulation, through which we get a clear sense of boundary wrapping a single-purpose unit. We also enjoy the vast array of language features that comes with them, e.g. private state, behaviours, local functions, virtual methods, etc. Inheritance, however, is one of the strongest couplings in the object-oriented paradigm; if we are not careful, it can make our solution over-complicated or, worse, counterproductive.
I always reason about inheritance as a way to express classification or taxonomy, when the nature of the problem domain bears the characteristics of specialisation and specification that warrant modelling it this way. Think of specialisation as a set of the same thing where each member expresses a slightly different flavour, whereas specification is about moulding. Inheritance is suitable for modelling problems that lack behaviours but are rich in state or side effects, whereas composition is the other way around. …
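A tiny sketch of the two (all names invented): specialisation via inheritance on one hand, behaviour assembled by composition on the other.

```javascript
// Specialisation: a set of the same thing, each subtype a slightly
// different flavour of the same operation.
class Discount {
  apply(price) { return price; } // base flavour: no discount
}
class SeasonalDiscount extends Discount {
  apply(price) { return price * 0.9; }
}
class LoyaltyDiscount extends Discount {
  apply(price) { return price * 0.85; }
}

// Composition: richer behaviour assembled from parts rather than
// baked into a hierarchy.
const composeDiscounts = (...discounts) => price =>
  discounts.reduce((p, d) => d.apply(p), price);
```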
Asynchrony is NOT about multiple background threads; it is about making more efficient use of the current thread.
I use async-await in C# all day, every day, yet I rarely pay close attention to its effect on performance, which can be subtle or very noticeable depending on how we use async-await. I never felt the need to question or verify what I learned from the Microsoft documentation articles; I think they have done a fantastic job in recent years keeping that documentation up to date and of high quality.
Until recently, that is, in a conversation with a friend of mine, who by the way is a brilliant programmer. He shared some of the design ideas behind an API he built that achieves phenomenal performance. Apart from caching heavily, the way he organises async tasks caught my attention: the async tasks are fired off in one go, then awaited; as each task finishes, its result is processed straight away. In other words, we eagerly process completed tasks while we wait for the other tasks to complete. …
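His API is C#, where this pattern is typically written with Task.WhenAny in a loop or by awaiting tasks in completion order. A rough JavaScript sketch of the same idea (function names are mine) looks like this:

```javascript
// Helper that simulates an async task completing after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

// Naive approach: `await Promise.all(tasks)` and only then loop over the
// results, so nothing is processed until the slowest task completes.

// Eager approach: attach the handler to each already-started task, so fast
// tasks are processed while slow ones are still in flight.
async function eagerProcess(tasks, handle) {
  await Promise.all(tasks.map(task => task.then(handle)));
}
```

The key point in both languages is that the tasks are started before any await happens; the eager version just moves the per-result work into the completion of each task instead of after the last one.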
In Robert Martin’s book Clean Code he explains why if statements are considered harmful and suggests that when there is an awful lot of if statements in our code, it usually indicates that the underlying nature of the problem is polymorphic. I am not going to discuss why if statements are considered harmful here; you can read an excellent article by JetBrains about the problems if statements can bring if you are interested.
I do see the point about why an excessive number of if statements is a terrible situation to be in. I don’t, however, quite agree that polymorphism underpins the problem at hand. Sometimes we can indeed refactor our way out of a sea of if statements with polymorphism (subtype, or inclusion, polymorphism to be specific). Other times this feels too heavy-handed, because some scenarios are just too simple to warrant subtype polymorphism, which is one of the strongest couplings in OO and not hard to get wrong! …
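For the cases where subtype polymorphism does fit, the classic refactor looks like this (a minimal sketch, shapes and names invented for illustration):

```javascript
// Before: a chain of ifs switching on a type tag.
// if (shape.kind === 'circle') return Math.PI * shape.r ** 2;
// if (shape.kind === 'square') return shape.s ** 2;
// ...

// After: each shape knows its own area, and the caller never branches.
class Circle {
  constructor(r) { this.r = r; }
  area() { return Math.PI * this.r ** 2; }
}
class Square {
  constructor(s) { this.s = s; }
  area() { return this.s ** 2; }
}

const totalArea = shapes =>
  shapes.reduce((sum, shape) => sum + shape.area(), 0);
```

For a two-case branch, though, that hierarchy is a lot of machinery; that is where something lighter, like a lookup table, earns its keep.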
Separate policy from mechanism
I personally don’t like to think of it as a rule as such; rather, I see it, just like any other rule or principle, as a perspective from which we think about and reason about our code structure.
This rule sounds a bit abstract, so let me give you an everyday experience that may help you understand and remember what it means.
Every time I try to explain this to my colleagues I use the example of the sliding doors at our office building, so here we go:
At work we have a pair of sliding doors; when we tap our access card on the sensor, they open. Now, imagine Jon Snow has just joined the company. Before Jon’s first day, the building manager goes: okay, there is a new guy, Jon; we have to change the sliding doors so his access card will work. …
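A minimal sketch of the point (all names invented): the door is pure mechanism, and who gets in is policy supplied from outside, so onboarding Jon changes data, not the door.

```javascript
// Mechanism: a door that only knows how to react to a tap.
// It asks an injected policy whether the card is authorised.
const makeSlidingDoor = isAuthorised => ({
  tap(card) { return isAuthorised(card) ? 'open' : 'closed'; },
});

// Policy: who may enter lives in data, outside the door.
const authorisedCards = new Set(['arya', 'sansa']);
const door = makeSlidingDoor(card => authorisedCards.has(card));

// Jon Snow joins: we change the policy's data, never the door.
authorisedCards.add('jon');
```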
Most articles or answers on the net explain either the client leg or the backend leg. Hardly any resource, as far as I could find, explains the complete end-to-end flow.
So I created a little experiment: a JS client with a C# Web API backend, to fully explore the possible approaches to file upload.
The demo source code is available here.
This article is based on React and ASP.NET Web API 2 for PDF uploads. For other backends and file formats the handling might differ syntactically, but the basic idea remains the same.
First of all, we need an HTML form. Inside it we have what React refers to as an uncontrolled input, because when we specify the input type as ‘file’, React will always treat it as a raw HTML input. In order to get hold of this input across re-renders, we need to give it a ref, which is created in the constructor of the React…
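Once we can reach the input through the ref, the upload leg itself is just a FormData POST. A minimal sketch (file contents and endpoint invented; in the component the file would come from something like this.fileInput.current.files[0]):

```javascript
// Stand-in for the file picked in the uncontrolled input; in the browser
// this would be this.fileInput.current.files[0] (name assumed).
const file = new Blob(['%PDF-1.4 example bytes'], { type: 'application/pdf' });

// Build the multipart payload.
const form = new FormData();
form.append('file', file, 'statement.pdf');

// Hand it to fetch. Do NOT set Content-Type yourself: the browser adds
// the multipart/form-data header with the correct boundary automatically.
// fetch('/api/upload', { method: 'POST', body: form });
```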
Problem: the try-catch insanity
Again, I admit I have done it; we all have. Another copy-and-paste job from some tutorial that was written up in two minutes for the purpose of learning, NOT fit for production code, and littered with try-catch.
Here is a contrived example:
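A minimal sketch of that kind of saga, using plain-object stand-ins for redux-saga’s call and put effect creators (the api and action shapes are invented):

```javascript
// Stand-ins mimicking redux-saga's effect-creator shapes.
const call = (fn, ...args) => ({ type: 'CALL', fn, args });
const put = action => ({ type: 'PUT', action });

// Hypothetical API module.
const api = { fetchUser: id => Promise.resolve({ id, name: 'Jon' }) };

// The try-catch that ends up copied into every single saga.
function* fetchUser(action) {
  try {
    const user = yield call(api.fetchUser, action.id);
    yield put({ type: 'FETCH_SUCCEEDED', user });
  } catch (e) {
    yield put({ type: 'FETCH_FAILED', message: e.message });
  }
}
// ...and the same try-catch again in fetchOrders, fetchAccounts, and so on.
```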
As you can see, this is not very good for our mental health; let’s see if we can do better.
Option 1: try-catch wrapper
In the case where we don’t want errors to bubble up to the root saga, or we are interested in handling them in a specific saga, we wrap the saga generator in a safe generator. …
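A sketch of such a wrapper (the put stand-in mimics redux-saga’s effect-creator shape; the error action type is invented):

```javascript
// Stand-in for redux-saga's `put` effect creator (plain-object shape assumed).
const put = action => ({ type: 'PUT', action });

// Wrap any saga so an error thrown anywhere inside it is caught once,
// here, and turned into an error action instead of killing the root saga.
function safe(saga) {
  return function* (...args) {
    try {
      yield* saga(...args);
    } catch (err) {
      yield put({ type: 'SAGA_ERROR', message: err.message });
    }
  };
}
```

Every individual saga can now drop its own try-catch; the wrapper is written exactly once.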
We’ve all done it: copy from a tutorial, stick it in production code and never look back! So this is what we end up with:
Okay, let’s apply some good old Unix programming philosophy, the Rule of Representation: fold knowledge into data structures, so program logic can be stupid and robust.
Here is how:
We use an object literal called ‘which’; its keys are action types and its values are lambda functions that each return a new state.
By doing this, we fold the correlation (knowledge) between an action type and its handler into a data structure, so our program can be as stupid as which[action.type].
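Putting it together, a minimal reducer in this style (action types and state shape invented):

```javascript
// The knowledge lives here: each action type maps to a handler
// that returns the new state.
const which = {
  ADD_TODO: (state, action) => ({ ...state, todos: [...state.todos, action.todo] }),
  CLEAR_TODOS: state => ({ ...state, todos: [] }),
};

const initialState = { todos: [] };

// The reducer itself stays stupid: look the handler up, or keep state as-is.
const reducer = (state = initialState, action) => {
  const handler = which[action.type];
  return handler ? handler(state, action) : state;
};
```

Adding a new action is now a one-line entry in `which`; the reducer body never changes.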
Thanks for reading. If you have any suggestions, please raise a PR on my GitHub, and stay tuned for the next article, about the Dictionary Pattern.
Before delving into what the pattern is, it is important to know where the inspiration is drawn from: the Unix Rule of Separation. If you haven’t read it, I strongly recommend you go through it.
Part two of the Rule Engine story is available now, here. Although it is not necessary to read the first part, it helps set the context as to why we need to think about our Rule Engine in a different paradigm.
I work in the REST API space quite a bit, and quite often I find myself having to check a bunch of business requirements to determine the eligibility of an operation. For example, a set of criteria needs to be met for a customer to be able to apply for a certain product, e.g. they must be over 18 years old, must be a resident, must have a pre-approved credit limit, etc. Most people, at least judging from the code I have seen, will naturally reach for if-else, which is one way of thinking about it. It is okay if we have a small set of business rules to check. …
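For contrast with the if-else instinct, those criteria can live in data as a list of predicates (a minimal sketch, field names invented):

```javascript
// Each rule is a tiny predicate over the customer. The rules live in data,
// not in a chain of if-else, so adding a rule means adding an entry.
const rules = [
  customer => customer.age >= 18,
  customer => customer.isResident,
  customer => customer.preApprovedLimit > 0,
];

// Eligibility is simply: every rule passes.
const isEligible = customer => rules.every(rule => rule(customer));
```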