Embedding LightSwitch in enterprise services (part 2/N): architecture for scaling out long running processes
In this article on embedding LightSwitch in enterprise services, I'll focus on using a WCF service as a vehicle for encapsulating a “business engine” that can work seamlessly with the online processing of a LightSwitch application. For now, I'll leave open what this business engine could be used for; we'll just focus on the architecture. Obviously, in later articles in this series I'll provide hands-on examples in Visual Studio. For the moment, think of a long running process as a kind of side-effect processing of things happening in the request-response pipeline of the online transaction processing application. That could be simply sending out thousands of mails, making a calculation which depends on other services in your enterprise, orchestrating other services, you name it.
Encapsulating such complex or heavy processing in something like a “business engine” makes sense not only from a performance perspective, but also because it enables unit and integration testing.
What is the problem with invoking long running processes directly in LightSwitch?
I have seen various attempts by people to invoke a long running process directly inside the request-response pipeline of the LightSwitch application by
- spinning up a background worker process or
- by using the “async pattern” in one way or another.
All this is, in my view, a genuine anti-pattern! Why? Well, it might work for processing that does not require access to the LightSwitch data (e.g. simply sending mails), but from the moment you need data access, the following issues arise:
- How do you hook up a ServerApplicationContext on another thread?
- Where is the response handled?
So, although you might succeed in not blocking the request-response pipeline of the LightSwitch server application, the overall performance of the LightSwitch server application is still impacted.
My view on this is simple: the request-response pipeline of the LightSwitch application (both OData and Web API) should remain lean, light and responsive.
That’s why we need something better for handling long running processes.
Let’s introduce WCF
The top-of-mind solution for a business engine that runs long running processes is WCF.
Ok, great! But we adopted LightSwitch because we wanted to get rid of these hard-core .Net artifacts, so there is a huge risk of getting pulled very quickly into a highly complex architecture. That's definitely a path that I want to avoid!
WCF lets you very elegantly dispatch a long running process to another service in a completely “fire and forget” manner, by means of a “one-way” service call. That's something which is not possible with, for example, a REST-based Web API call. Furthermore, WCF also introduces the concept of message queues. I'll elaborate more on this in later articles.
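To make the fire-and-forget idea concrete, here is a minimal sketch of what such a service contract could look like. The interface and operation names are purely illustrative assumptions, not part of any existing artifact:

```csharp
using System.ServiceModel;

// Hypothetical contract for the "business engine" service.
[ServiceContract]
public interface IBusinessEngine
{
    // Fire and forget: the caller returns immediately, the engine
    // processes the batch in the background. One-way operations
    // must return void and cannot have out/ref parameters.
    [OperationContract(IsOneWay = true)]
    void SendMailBatch(int batchId);

    // Classic request-response, for "inline" calculations where
    // the caller needs the result.
    [OperationContract]
    decimal CalculatePrice(int productId, int quantity);
}
```

The `IsOneWay = true` attribute is exactly what gives us the fire-and-forget semantics: the LightSwitch pipeline hands the message off and continues without waiting.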
Another level of complexity that such a WCF service can introduce is data access. What will we do when the service operation itself needs access to the LightSwitch domain model? Will we set up our own Entity Framework context, our own repository pattern, and so on, for this?
Well, I can imagine that in certain conditions we have no other option, but I tend to use the “LightSwitch goodness” as far as possible.
The overall architecture
Let's first look at the architecture of a normal LightSwitch application:
Pretty simple: the Silverlight or HTML5 client communicates for its normal REST-based handling over an OData pipeline, potentially complemented with a Web API pipeline handling “commands” for more RPC-style (remote procedure call) calls.
Let’s introduce now our WCF service encapsulating the business engine:
Ok, don’t get intimidated by this more complex picture.
Obviously, the Silverlight or HTML5 client still communicates with the LightSwitch server app running on IIS. It's called “LightSwitch Server App Copy 1”. The reason why it's called copy 1 will become clear in a minute or two.
When a long running process needs to be invoked during the request-response pipeline, we call the WCF service. This WCF service might orchestrate other services, make a complex calculation, whatever.
But from the moment the WCF service needs access to the domain model logic of the application, we make a Web API call to another instance of the LightSwitch application, which is an exact copy of LightSwitch Server App Copy 1.
So, LightSwitch Server App copy 1 is our normal LightSwitch app and serves the online transaction processing.
LightSwitch Server App Copy 2 will never be used for online transaction processing; it is only meant for the scenario where the WCF service needs access to the LightSwitch domain model.
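A sketch of how the WCF service could reach Copy 2 over its Web API pipeline. The base address and the command route here are assumptions for illustration; your deployment will use its own names:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Small gateway class inside the WCF service project that talks to
// the second LightSwitch instance over its Web API "command" pipeline.
public class LightSwitchGateway
{
    private static readonly HttpClient client = new HttpClient
    {
        // Hypothetical deployment address of LightSwitch Server App Copy 2.
        BaseAddress = new Uri("http://localhost/LightSwitchAppCopy2/")
    };

    public async Task InvokeCommandAsync(string command, string jsonPayload)
    {
        var content = new StringContent(jsonPayload, Encoding.UTF8, "application/json");
        // Hypothetical command route; matches whatever Web API routes
        // you defined in the LightSwitch server project.
        var response = await client.PostAsync("api/commands/" + command, content);
        response.EnsureSuccessStatusCode();
    }
}
```

The point is that the WCF service stays ignorant of the LightSwitch internals; it only knows a handful of command URLs.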
This is really the base for scaling out long running processes.
Anyhow, deploying a second instance of the LightSwitch app is very simple: one additional press of the publish button (after changing the name of the app). You could decide to deploy LightSwitch Server App Copy 2 in “service only” mode. Deploying a WCF service is also a piece of cake.
The WCF protocols in the above picture are of both types: “fire and forget” and the more regular “request-response”. Fire and forget is the one we want to use for long running processes, where the request-response pipeline doesn't need an immediate “answer” from the service. For, let's call them, “inline calculations”, where the result of the calculation matters for the LightSwitch response towards the HTML5/SL client, we will use classic request-response WCF service calls.
Can we still debug all this in visual studio in an end-to-end way?
Indeed, we can still have a very nice debugging experience in Visual Studio. The only thing that matters is adding the WCF service to the LightSwitch solution. Obviously, what will not work in debug mode is calling the second LightSwitch instance. But that's not a big deal: the WCF service will simply call back to the LightSwitch instance running in debug. As a result, we miss the scalability gain in debug, but we can debug the whole thing end to end, and that's what really matters.
Depending on which WCF bindings we want to use, we'll need to host the WCF service in debug either in IIS Express (which is used for the LightSwitch app) or in the local full IIS.
Full IIS will be needed when we use bindings based on message queues, or when we want to use netTcpBinding as the WCF binding protocol. More on that later.
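For local experiments outside IIS, the netTcpBinding endpoint can also be wired up in code. This is a self-hosting sketch only (in full IIS you would configure the same endpoint in web.config); `BusinessEngine`, `IBusinessEngine` and the port are assumed names:

```csharp
using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        // Self-host the hypothetical business engine over net.tcp.
        using (var host = new ServiceHost(typeof(BusinessEngine),
                   new Uri("net.tcp://localhost:8523/BusinessEngine")))
        {
            // NetTcpBinding: binary encoding over TCP, nearly as fast
            // as an in-process call on localhost.
            host.AddServiceEndpoint(typeof(IBusinessEngine),
                new NetTcpBinding(), string.Empty);
            host.Open();
            Console.WriteLine("Business engine listening. Press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```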
All this seems to generate a lot of additional “traffic”?
Not at all. Unless you deploy the service to another server, everything happens on “localhost”. If you are worried about speed, use netTcpBinding, which is almost as fast as an in-process call.
Is LightSwitch intended for this?
Up to you to judge, but we are not at all doing “exotic” things. The only “special” thing we use here is the ServerApplicationContext, when the WCF service calls back into the LightSwitch app. But that's really mainstream functionality by now.
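Inside the LightSwitch server project (e.g. in a Web API command handler), the ServerApplicationContext gives full access to the domain model. A minimal sketch, assuming a hypothetical `Orders` entity set:

```csharp
// Create a server-side context outside the normal client pipeline.
// LightSwitch generates strongly typed accessors per entity set,
// e.g. Orders_SingleOrDefault; "Orders" and "Status" are assumptions.
using (var context = ServerApplicationContext.CreateContext())
{
    var order = context.DataWorkspace.ApplicationData
                       .Orders_SingleOrDefault(42);
    if (order != null)
    {
        order.Status = "Processed";
        // Runs the full LightSwitch save pipeline, including
        // validation and business logic.
        context.DataWorkspace.ApplicationData.SaveChanges();
    }
}
```

Note that this runs through the regular LightSwitch save pipeline, so all the domain logic we wrote for the online app applies to the service scenario as well.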
I like the approach because it allows us to use the LightSwitch application exactly for what it is meant for: doing online transaction processing. So, in my view, the WCF approach simplifies things, at least when you have service processes alongside the online processing.
What are the infrastructural options?
We can run the two instances of the LightSwitch application and the WCF service on one IIS server. But what we definitely need to do is give them all different application pools. By doing so, we can potentially give lower priority to LightSwitch app 2, so that long running processes will not have a performance impact on the online transaction processing. As you might know, IIS (Internet Information Services) is a complex and powerful piece of software. It's really specialized in separating different processes into completely isolated sandboxes, with advanced memory management capabilities.
Another scalability option is, of course, to run the WCF service and the second LightSwitch instance on a dedicated IIS server. Then you will notice what real scalability means.
Indeed, our service logic is spread between the WCF Service and the second LightSwitch instance
Well, only when the service needs access to the LightSwitch domain model.
But the fact that things are spread is not that bad at all when you think about (unit) testability. Unit tests for the WCF service could simply mock out the data access logic towards the LightSwitch app.
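This mocking becomes trivial once the call to the LightSwitch app is hidden behind an interface. A sketch with illustrative names only:

```csharp
// Abstraction over the calls towards the LightSwitch app.
public interface ILightSwitchGateway
{
    void InvokeCommand(string command, string jsonPayload);
}

// The engine's logic depends only on the abstraction.
public class MailBatchProcessor
{
    private readonly ILightSwitchGateway gateway;

    public MailBatchProcessor(ILightSwitchGateway gateway)
    {
        this.gateway = gateway;
    }

    public void Process(int batchId)
    {
        // ... long running work (sending mails, calculations) ...

        // Report back to the LightSwitch app when done.
        gateway.InvokeCommand("MarkBatchDone",
            "{\"batchId\":" + batchId + "}");
    }
}
```

In a unit test, `ILightSwitchGateway` is replaced by a recording fake, so no LightSwitch instance (and no database) is needed to test the engine's behavior.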
Of course, not all WCF service operations will need data access. What's more, some service operations might need data access, but without using the LightSwitch server. For more administrative tasks, for example, it might be better to use Entity Framework and connect directly to the database from the WCF service, or to use stored procedures.
In the next article, I'll start by setting up a WCF service in a robust way in our LightSwitch solution. For the first articles we'll skip both security and transaction management. We'll take baby steps. The precise content isn't entirely clear to me yet either.
Apparently, this is article number 100 on this web site. Cheers !
More in a few weeks. I’ll take some holiday now.