Using the Priority Queue Pattern with our Microsoft Azure Solution

In a recent programming conundrum, I was trying to find a way to promote certain customer queries to the top of the support list ahead of those of a more general nature. A limitation of Azure Service Bus is that you cannot add priority headers to messages being placed onto the queue, so that was a nonstarter.

Queues work on a FIFO (first in, first out) basis and allow developers to decouple the components that add items to the queue from those that process them.

The queue in general was great for adding items to be processed asynchronously; however, with recent updates to our SLA requirements we were encountering issues where non-urgent requests were creating a service bottleneck in our system.

The solution we decided to implement was the Priority Queue pattern on Microsoft Azure, using a few message queues and multiple consumer instances against those queues.

The plan was to identify the data being processed on the producer side and, based on that, use a message router to send the relevant messages to the corresponding queue. In our example, there were three priority types in use (High, Medium and Low).
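
As a rough sketch of what that producer-side routing could look like (the queue names and the use of the Azure.Messaging.ServiceBus SDK here are my own assumptions for illustration, not the exact implementation):

    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    public enum Priority { High, Medium, Low }

    public class PriorityMessageRouter
    {
        private readonly ServiceBusClient _client;

        public PriorityMessageRouter(string connectionString)
            => _client = new ServiceBusClient(connectionString);

        // Route each message to the queue that matches its priority.
        public async Task SendAsync(string body, Priority priority)
        {
            string queueName = priority switch
            {
                Priority.High   => "requests-high",     // assumed queue names
                Priority.Medium => "requests-medium",
                _               => "requests-low"
            };

            ServiceBusSender sender = _client.CreateSender(queueName);
            await sender.SendMessageAsync(new ServiceBusMessage(body));
        }
    }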

In essence the queue would function as normal, with the producer adding messages and the consumers peeking and deleting them as they arrive. The difference, and where the Priority Queue pattern comes into play, is the number of consumers allocated to each queue. For the high priority queue we had five instances competing to consume the messages, for the medium queue we had three, and for the low queue we had one. The result is that the high priority queue could handle far more requests, and handle them faster, than the other queues, and could therefore provide a much better SLA time and meet expectations.
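
The consumer side could be sketched along these lines; in our case these were separate competing consumer instances, but MaxConcurrentCalls on a ServiceBusProcessor is shown here as a single-process approximation of giving each queue a different amount of processing capacity (queue names again assumed):

    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;

    public static class PriorityConsumers
    {
        // Start a processor for a queue with a given degree of concurrency.
        public static async Task<ServiceBusProcessor> StartAsync(
            ServiceBusClient client, string queueName, int concurrency)
        {
            ServiceBusProcessor processor = client.CreateProcessor(queueName,
                new ServiceBusProcessorOptions { MaxConcurrentCalls = concurrency });

            processor.ProcessMessageAsync += async args =>
            {
                Console.WriteLine($"[{queueName}] {args.Message.Body}");
                await args.CompleteMessageAsync(args.Message);
            };

            processor.ProcessErrorAsync += args =>
            {
                Console.WriteLine(args.Exception);
                return Task.CompletedTask;
            };

            await processor.StartProcessingAsync();
            return processor;
        }
    }

    // Usage: give the high priority queue the most capacity and the low priority queue the least.
    // await PriorityConsumers.StartAsync(client, "requests-high", 5);
    // await PriorityConsumers.StartAsync(client, "requests-medium", 3);
    // await PriorityConsumers.StartAsync(client, "requests-low", 1);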


[Figure: priority-queue-separate]

For more information, you can read the Priority Queue pattern documentation: https://docs.microsoft.com/en-us/azure/architecture/patterns/priority-queue

You can also find the implementation example on GitHub.

Headless CMS architecture pattern

In recent days I have come across a new term whilst investigating alternatives to our current CMS architecture. CMS systems have been around for many years and, whilst they have served their purpose in allowing us to add, update and deliver content to customers, they were designed primarily for desktop devices. With the advancement of mobile and other devices, it has become increasingly important that a CMS solution can cope with this shift and also account for the scalability issues one may face.

What does Headless CMS provide?

A headless CMS is a back-end-only CMS built as a content repository that exposes a RESTful API responsible for delivering data to any device. This is a presentation-agnostic approach to delivering content: no particular presentation layer is tied to the back end.
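
As a small illustration of what consuming that API might look like (the host name, endpoint and content shape below are hypothetical), any front end, whether a website, mobile app or anything else, could pull content and render it however it chooses:

    using System;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    public record ContentItem(string Slug, string Title, string Body);

    public class ContentClient
    {
        private readonly HttpClient _http = new HttpClient
        {
            BaseAddress = new Uri("https://cms.example.com/")   // hypothetical headless CMS host
        };

        // The CMS only serves data; the caller decides how (and on what device) to render it.
        public async Task<ContentItem?> GetContentAsync(string slug)
        {
            string json = await _http.GetStringAsync($"api/content/{slug}");   // hypothetical endpoint
            return JsonSerializer.Deserialize<ContentItem>(json,
                new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
        }
    }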


What is in a traditional CMS?

  • Provides a method to store and maintain data.
  • Provides a CRUD UI.
  • Provides a presentation layer to display the data.

What is in a headless CMS?

  • Provides a method to store and maintain data.
  • Provides a CRUD UI.
  • Provides a RESTful API to the data.

Why decouple CMS?

With traditional monolithic CMS systems, the content management application and the content delivery application exist together in a single application. They provide a solution for simple blogs and basic websites where everything can be managed in one place.

With a decoupled CMS, we are able to separate the content management application from the content delivery application, which frees developers to choose how they want to deliver content to users. It is important to understand that creating content is not the same as delivering it, and that the separation between the two is kept clear.

A decoupled CMS promotes a microservices architecture approach, and you can leverage event-driven message queues to store state for content and updates to the website. With the event-driven approach, when you delete a content element, a contentdelete event is published.

Event consumers are then responsible for consuming the changes produced by the event source that handles those events, and this processing can be optimised through the API.
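
A minimal in-process sketch of that idea (the event and handler names are made up for illustration; in a real deployment the events would travel over a message queue rather than a list of delegates):

    using System;
    using System.Collections.Generic;

    // Hypothetical event raised by the content management side when an element is deleted.
    public record ContentDeleted(Guid ContentId, DateTime OccurredAt);

    public class EventBus
    {
        private readonly List<Action<ContentDeleted>> _handlers = new();

        public void Subscribe(Action<ContentDeleted> handler) => _handlers.Add(handler);

        public void Publish(ContentDeleted evt)
        {
            foreach (var handler in _handlers)
                handler(evt);   // stand-in for publishing onto a message queue
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            var bus = new EventBus();

            // The delivery side consumes the change, e.g. by evicting a cached page.
            bus.Subscribe(evt => Console.WriteLine($"Evicting cached content {evt.ContentId}"));

            // The management side publishes the event when a content element is deleted.
            bus.Publish(new ContentDeleted(Guid.NewGuid(), DateTime.UtcNow));
        }
    }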


What would an architecture look like?

[Figure: architecture]

For a headless CMS system, we could deploy something based upon CQRS:-

CQRS stands for Command Query Responsibility Segregation. It's a pattern that I first heard described by Greg Young. At its heart is the notion that you can use a different model to update information than the model you use to read information.

From: Martin Fowler
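
A very small sketch of what that separation might look like for content (these types are illustrative assumptions, not a reference implementation): commands go through a write model, while queries are answered from a separate read model shaped for display:

    using System;
    using System.Collections.Generic;

    // Write side: commands change content through the write model.
    public record UpdateContentCommand(Guid Id, string Title, string Body);

    public class ContentWriteModel
    {
        private readonly Dictionary<Guid, (string Title, string Body)> _store = new();

        public void Handle(UpdateContentCommand cmd)
        {
            _store[cmd.Id] = (cmd.Title, cmd.Body);
            // In a fuller implementation this would also publish an event so the
            // read model below can be updated asynchronously.
        }
    }

    // Read side: queries never touch the write model.
    public record ContentSummary(Guid Id, string Title);

    public class ContentReadModel
    {
        private readonly Dictionary<Guid, ContentSummary> _projections = new();

        public void Apply(Guid id, string title) => _projections[id] = new ContentSummary(id, title);

        public ContentSummary? GetSummary(Guid id)
            => _projections.TryGetValue(id, out var summary) ? summary : null;
    }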


Utilise event sourcing:-

Create an approach that handles a sequence of events as operations on data, where each event is appended to a temporary or permanent store and is read-only once written. When an action takes place, the application records the sequence of command operations which, once stored, can later be replayed to execute the same series of operations against the data.
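
A hedged, minimal sketch of that idea, with an in-memory list standing in for a real event store: events are appended and never modified, and the current state of a piece of content is rebuilt by replaying them in order:

    using System;
    using System.Collections.Generic;

    // Events are immutable facts describing what happened to a piece of content.
    public abstract record ContentEvent(Guid ContentId);
    public record ContentCreated(Guid ContentId, string Title) : ContentEvent(ContentId);
    public record ContentRenamed(Guid ContentId, string NewTitle) : ContentEvent(ContentId);

    public class EventStore
    {
        private readonly List<ContentEvent> _events = new();

        // Append-only: events are added, never updated or deleted.
        public void Append(ContentEvent evt) => _events.Add(evt);

        // Current state is derived by replaying the stored events in order.
        public string? CurrentTitle(Guid contentId)
        {
            string? title = null;
            foreach (var evt in _events)
            {
                if (evt.ContentId != contentId) continue;
                title = evt switch
                {
                    ContentCreated created => created.Title,
                    ContentRenamed renamed => renamed.NewTitle,
                    _ => title
                };
            }
            return title;
        }
    }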

Here's a brief fact sheet of what you get with event sourcing:-

  • Events are immutable.
  • Tasks produced can run in the background.
  • Improved performance and stability, as there is no contention over processing transactions.
  • Events represent what has occurred.
  • The append-only nature of the data means an audit trail is provided, and events can be replayed at any time.
  • Decouples tasks from events, providing flexibility and extensibility.

There are however a few issues that also need to be considered:-

  • There is some delay between the handler adding events to the event store, publishing the events, and consumers handling them.
  • If you need to undo a change to the data, the only way is to add a compensating event to the event store.

You can read more about it here.

https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing

Eliding Async and Await

Being able to elide async and await can provide efficiency benefits that may seem minimal at first, but in the bigger picture it allows the compiler to skip generating the async state machine, resulting in fewer compiler-generated types in the assembly, less pressure on the GC, and fewer CPU instructions executed.

When an async state machine is generated for a single await, you effectively get a number of extra IL instructions created as a result. In my opinion it is a benefit to elide async/await where possible, but in some cases it isn't, such as when the awaited call sits inside a using statement.
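
A short sketch of the difference (my own example, not taken from the article): the first method simply returns the task it is handed, so no state machine is generated, while the second has to keep async/await because the awaited work must finish before the using block disposes the reader:

    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class DownloadService
    {
        private readonly HttpClient _client = new HttpClient();

        // Elided: no async/await keywords, so the compiler generates no state machine;
        // the caller simply awaits the task returned by HttpClient.
        public Task<string> GetPageAsync(string url)
            => _client.GetStringAsync(url);

        // Not a candidate for eliding: the await must complete inside the using block,
        // otherwise the StreamReader could be disposed before the read finishes.
        public async Task<string> ReadFileAsync(string path)
        {
            using (var reader = new StreamReader(path))
            {
                return await reader.ReadToEndAsync();
            }
        }
    }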

Async does have great benefits, such as allowing requests to execute concurrently, but it is less efficient than equivalent synchronous code. Adding more asyncs and awaits to an application ends up multiplying that cost at each point.

You can read more about it here:-

https://blog.stephencleary.com/2016/12/eliding-async-await.html