Using the Priority Queue Pattern with our Microsoft Azure Solution

In a recent programming conundrum, I was trying to find a way to promote certain customer queries to the top of the support list, ahead of those of a general nature. A limitation of Azure Service Bus is that you cannot add priority headers to messages being placed on the queue, so that was a non-starter.

Queues work on a FIFO (first in, first out) basis and allow developers to decouple the components that add items to the queue from those that perform tasks against it.

In general the queue was great for adding items to be processed asynchronously; however, with recent updates to our SLA requirements we were encountering issues where non-urgent requests were creating a service bottleneck in our system.

The solution we decided to implement was the Priority Queue pattern on Microsoft Azure, using a few message queues and multiple consumer instances against those queues.

The plan was to classify the data on the producer side and, based on that classification, have a message router send each message to the corresponding queue. In our example, we used three priority levels (High, Medium and Low).
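
To make that concrete, here is a rough sketch of what the producer-side routing could look like in Python with the azure-servicebus SDK. The queue names and the classification rule are made up for illustration:

    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    # Hypothetical queue names - one Service Bus queue per priority level.
    PRIORITY_QUEUES = {
        "High": "support-high",
        "Medium": "support-medium",
        "Low": "support-low",
    }

    def classify(query: dict) -> str:
        """Assumed business rule: premium customers jump the line."""
        if query.get("tier") == "premium":
            return "High"
        if query.get("tier") == "standard":
            return "Medium"
        return "Low"

    def route(conn_str: str, query: dict) -> None:
        """Send the query to the queue that matches its priority."""
        priority = classify(query)
        with ServiceBusClient.from_connection_string(conn_str) as client:
            sender = client.get_queue_sender(queue_name=PRIORITY_QUEUES[priority])
            with sender:
                sender.send_messages(ServiceBusMessage(str(query)))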

In essence each queue functions as normal, with the producer adding messages and consumers peeking and deleting them as they arrive. The difference, and where the Priority Queue pattern comes into play, is in the number of consumer instances subscribed to each queue. For the high priority queue we had five instances competing to consume messages; for the medium queue we had three, and for the low queue we had one. As a result, the high priority queue could handle far more requests, and handle them faster, than the other queues, and therefore provide a much better SLA time and meet expectations.
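
A sketch of the consumer side, again assuming the azure-servicebus SDK, where the worker counts per queue mirror the 5/3/1 split described above (handle is a stand-in for the real processing logic):

    from concurrent.futures import ThreadPoolExecutor
    from azure.servicebus import ServiceBusClient

    # Consumer counts mirror the ratios above: 5 high, 3 medium, 1 low.
    CONSUMERS = {"support-high": 5, "support-medium": 3, "support-low": 1}

    def handle(message) -> None:
        print(str(message))  # placeholder for the real processing logic

    def consume(conn_str: str, queue_name: str) -> None:
        """Receive and complete messages, competing with sibling consumers."""
        with ServiceBusClient.from_connection_string(conn_str) as client:
            with client.get_queue_receiver(queue_name=queue_name) as receiver:
                for message in receiver:  # blocks; runs until interrupted
                    handle(message)
                    receiver.complete_message(message)

    def start_all(conn_str: str) -> None:
        with ThreadPoolExecutor(max_workers=sum(CONSUMERS.values())) as pool:
            for queue_name, count in CONSUMERS.items():
                for _ in range(count):
                    pool.submit(consume, conn_str, queue_name)

In production each consumer would typically be its own worker instance rather than a thread in one process, but the competing-consumers behaviour is the same.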

[Diagram: separate high, medium and low priority queues, each with its own pool of competing consumers]

For more information you can read the Priority Queue pattern documentation at https://docs.microsoft.com/en-us/azure/architecture/patterns/priority-queue

You can also find the implementation example on GitHub.

Headless CMS architecture pattern

Recently I have come across a new term while investigating alternatives to our current CMS architecture. CMS systems have been around for many years, and whilst they have served their purpose in allowing us to add, update and deliver content to customers, they were designed primarily for desktop devices. With the advancement of mobile and other devices, it has become increasingly important that a CMS solution cope with this shift and also account for the scalability issues that come with it.

What does Headless CMS provide?

A headless CMS is a back-end-only content management system that is built around a content repository and exposes a RESTful API through which data can be delivered to any device. This is a presentation-agnostic approach to delivering data: no particular front end is tied to the back end.
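
As a rough illustration, any client (web, mobile or otherwise) could pull content over HTTP along these lines; the endpoint URL and response shape here are hypothetical:

    import requests

    # Hypothetical base URL for a headless CMS content API.
    API_BASE = "https://cms.example.com/api/v1"

    def get_article(slug: str) -> dict:
        """Fetch raw content as JSON; each client decides how to render it."""
        response = requests.get(f"{API_BASE}/articles/{slug}", timeout=10)
        response.raise_for_status()
        return response.json()

    article = get_article("welcome")
    print(article["title"])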

What is in a traditional CMS?

  • Provides a method to store and maintain data.
  • Provides a CRUD UI
  • Provides a presentation layer to display the data.

What is in a headless CMS?

  • Provides a method to store and maintain data.
  • Provides a CRUD UI
  • Provides a RESTful API to the data.

Why decouple CMS?

In traditional monolithic CMS systems, the content management application and the content delivery application exist together as a single application. They provide a solution for simple blogs and basic websites where everything can be managed in one place.

With a decoupled CMS we are able to separate the content management application from the content delivery application, which frees developers to choose how they want to deliver content to users. It is important to understand that creating content is not the same as delivering it, and that the separation between the two is kept clear.

A decoupled CMS also promotes a microservices architecture, in which you can leverage event-driven message queues to track the state of content and updates to the website. With the event-driven approach, when you delete a content element, a corresponding event is raised, for example a ContentDeleted event.

Event consumers are then responsible for consuming the changes produced by the event source, and each consumer can be optimised independently behind the API.
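
For example, deleting a content element could publish a ContentDeleted event to a topic for any interested consumers to pick up. A minimal sketch with the azure-servicebus SDK, using a made-up topic name:

    import json
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    def publish_content_deleted(conn_str: str, content_id: str) -> None:
        """Raise a ContentDeleted event on a (hypothetical) cms-events topic."""
        event = {"type": "ContentDeleted", "id": content_id}
        with ServiceBusClient.from_connection_string(conn_str) as client:
            with client.get_topic_sender(topic_name="cms-events") as sender:
                sender.send_messages(ServiceBusMessage(
                    json.dumps(event),
                    subject="ContentDeleted",  # subscribers can filter on this
                ))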

What would an architecture look like?

[Diagram: headless CMS architecture]

For a headless CMS system, we could deploy something based upon CQRS:-

CQRS stands for Command Query Responsibility Segregation. It's a pattern that I first heard described by Greg Young. At its heart is the notion that you can use a different model to update information than the model you use to read information.

From: Martin Fowler
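
A minimal sketch of the idea in Python (the names are made up); the point is simply that commands and queries go through two different models:

    from dataclasses import dataclass

    @dataclass
    class CreateArticle:
        """A command: expresses an intent to change state."""
        article_id: str
        title: str
        body: str

    class ArticleCommandHandler:
        """Write side: commands mutate the canonical write model."""
        def __init__(self, write_store: dict):
            self.write_store = write_store

        def handle(self, cmd: CreateArticle) -> None:
            self.write_store[cmd.article_id] = {"title": cmd.title, "body": cmd.body}

    class ArticleQueryHandler:
        """Read side: queries hit a separate, denormalised view."""
        def __init__(self, read_view: dict):
            self.read_view = read_view

        def get_summary(self, article_id: str):
            entry = self.read_view.get(article_id)
            return entry["title"] if entry else None

    # In a full system a projection keeps the read view up to date,
    # typically by subscribing to events emitted by the write side.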

Utilise event sourcing:-

Event sourcing is an approach that handles operations on data as a sequence of events, each of which is appended to an append-only store (temporary or permanent). When an action takes place, the application records the sequence of command operations as events; once stored, these can later be replayed to execute the same series of operations against the data.
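
A toy sketch of the append-and-replay idea in Python (not tied to any particular Azure API):

    from datetime import datetime, timezone

    class EventStore:
        """Append-only list of immutable events; state is rebuilt by replay."""
        def __init__(self):
            self._events = []

        def append(self, event_type: str, data: dict) -> None:
            self._events.append({
                "type": event_type,
                "data": data,
                "at": datetime.now(timezone.utc).isoformat(),  # audit trail
            })

        def replay(self) -> dict:
            """Fold every recorded event, in order, over an empty state."""
            state = {}
            for event in self._events:
                if event["type"] == "ContentCreated":
                    state[event["data"]["id"]] = event["data"]["body"]
                elif event["type"] == "ContentDeleted":
                    state.pop(event["data"]["id"], None)
            return state

    store = EventStore()
    store.append("ContentCreated", {"id": "about-us", "body": "Hello"})
    store.append("ContentDeleted", {"id": "about-us"})
    print(store.replay())  # {} - the delete compensated the create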

Here's a brief fact sheet of what you get with event sourcing:-

  • Events are immutable.
  • Tasks produced can run in the background.
  • Improved performance and stability, as there is no contention when processing transactions.
  • Events describe what has occurred.
  • The append-only nature of the data provides an audit trail, and events can be replayed at any time.
  • Tasks are decoupled from events, which provides flexibility and extensibility.

There are, however, a few issues that also need to be considered:-

  • There is some delay between the handler adding events to the event store, the events being published, and consumers handling them.
  • The only way to undo a change to the data is to add a compensating event to the event store.

You can read more about it here.

https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing

Microsoft Future Decoded London 2017

Today I was lucky enough to attend Microsoft's Future Decoded event at the London ExCeL centre. As usual there were numerous talks of interest but, as always, I had to choose diligently the ones that were of most interest to me.

For those who haven't attended this Microsoft event before, it offers Microsoft's insight into their vision of tomorrow; the talks centre around the technologies and strategies that Microsoft are investing in and would like us to pay attention to.

The keynote was essentially Microsoft's sales pitch for their new Surface, Surface Book and Surface Hub devices, and how they can leverage Microsoft's collaborative workspaces to bring together teams from different locales. They gave a demonstration on the Hub in which an individual in the audience added storyboard sketches to the workspace, drawing and reordering them, while at the same time another team in the US uploaded various photos to the same workspace and rearranged them in any order they liked.

This collaborative and seemingly lag-free workspace sharing looks like an ideal way to bring teams together; as I have found when working with personnel spread over multiple locations, real-time updates help avoid the bottlenecks of waiting for, or delivering, information to each other.

We also had a glimpse of HoloLens, and of how Microsoft is leveraging augmented reality for use in the business place. Panos Panay demonstrated how, simply using a Surface and the AR app, you can drop an augmented Hub into the room and, side by side with a real Hub, see how realistic it is and envision what it would actually look like in real life.