Introduction to Message Queues

The basic architecture of a message queue is simple: client applications called producers create messages and deliver them to the message queue. Another application, called a consumer, connects to the queue and retrieves the messages to be processed. Messages are stored in the queue until the consumer retrieves them for processing.

In this post I will share some of the benefits you can gain from utilizing message queues in your application architecture and look at some of the queuing platforms available today. In the following posts in this series I will explore how message queues can be utilized with Sitecore.

Benefits of Message Queues

Here are some of the main features and benefits message queues provide to your application architecture:

Decoupling – decoupling between services can be achieved by simply adding a layer of abstraction, such as a message queue, between the content producer and the content consumer. Message queues remove direct dependencies between components.

Persistence – if your application encounters an issue while transmitting data and that data is not persisted, it is gone forever. Queues mitigate this by persisting data until it has been fully processed. Most message queues require a process to explicitly indicate that it has finished processing a message before the message is removed from the queue, ensuring your data is kept safe until processing is complete.

Resilience – queues provide a level of resilience: if a component in your system fails, it does not impact the entire system, because message queues decouple the various components. For example, if the process that reads messages from the queue fails, messages can still be added to the queue, and they will be processed once the failed component recovers.

Batch Processing – queues are very useful when you want to process operations in batches. Batching helps optimize performance by letting you tune the size of each transaction.

Asynchronous – queues enable asynchronous processing, which allows you to put a message on the queue without processing it immediately.

Ordering – queues are capable of providing guarantees that data will be processed in a specific order (FIFO).

Delivery Guarantees – queues can provide guarantees about how messages are delivered, for example that a message is processed at least once, or exactly once.

Scalability – because message queues decouple your processes, it’s easy to scale up the send or receive rate of messages – simply add another process.

Message Queue Solutions

Let’s take a quick look at a few popular message queuing platforms that are available.

RabbitMQ

If you have stood up an on-premises instance of Coveo, you will have come across RabbitMQ. Coveo uses RabbitMQ to store items it receives from Sitecore in message queues. Sitecore items get pushed to those queues by the Coveo Search Provider, and the Queue crawler in turn retrieves those items and indexes them in the Coveo index.

RabbitMQ is a high-availability messaging framework which implements the Advanced Message Queuing Protocol (AMQP). AMQP is an open-standard, wire-level protocol, similar to HTTP, and it is independent of any particular vendor. Here are some key concepts of AMQP:

  • Message broker: the messaging server which applications connect to.
  • Exchange: a message router on the broker; a broker will typically host a number of exchanges. A client submits a message to an exchange, which routes it to one or more queues.
  • Queue: a store for messages which normally implements the first-in-first-out pattern.
  • Binding: a rule that connects the exchange to a queue. The rule also determines which queue the message will be routed to.

RabbitMQ is written in Erlang. There are client libraries for a number of frameworks, including .NET, as well as tools for developing with RabbitMQ.
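
To make those concepts concrete, here is a minimal sketch using the RabbitMQ .NET client (RabbitMQ.Client) that declares an exchange, a queue and a binding, then publishes a message. The exchange, queue and routing-key names are placeholders made up for this example, and the exact API surface varies slightly between client versions.

    using System.Text;
    using RabbitMQ.Client;

    // Connect to the broker (assumes a local RabbitMQ instance with default credentials).
    var factory = new ConnectionFactory { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    // Declare an exchange, a queue, and a binding that routes matching messages to the queue.
    channel.ExchangeDeclare("orders", ExchangeType.Direct, durable: true);
    channel.QueueDeclare("order-processing", durable: true, exclusive: false, autoDelete: false);
    channel.QueueBind("order-processing", "orders", routingKey: "order.created");

    // Publish a message; the exchange routes it to the bound queue, where it is stored
    // until a consumer retrieves it and acknowledges (BasicAck) it after processing.
    var body = Encoding.UTF8.GetBytes("{\"orderId\": 42}");
    channel.BasicPublish(exchange: "orders", routingKey: "order.created", basicProperties: null, body: body);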

With RabbitMQ‘s extensive configuration options, there are several methods for creating a highly available RabbitMQ service. A clustering option allows multiple RabbitMQ servers to operate as a single logical server, while federation and shoveling provide ways to accept and forward messages to other servers. Clusters can also federate with one another to create an even more resilient platform that more closely emulates AWS SQS high availability, but it comes at the cost of more servers and moving parts to be configured and maintained.

There is obvious overhead in the fact that you must host your own instances of RabbitMQ along with the supporting infrastructure, although you can also utilize a managed service like CloudAMQP, which automates the setup, running and scaling of RabbitMQ clusters.

More info

Amazon SQS

AWS Simple Queue Service (SQS) is a fast, reliable, scalable, fully managed message queuing service. SQS is a highly available and distributed service which makes it reliable. Messages are replicated inside AWS, making message loss due to node failure virtually non-existent. SQS has a fairly straightforward method for publishing messages: create a queue and then publish messages to it. It’s that simple.

I’ve had a really positive experience utilizing SQS queues in the past and have to say it provides a very quick and easy way to set up a simple queuing solution for your application. The AWS SDK for .NET enables you to easily work with Amazon Web Services to build solutions.
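
As a rough sketch of that create/publish/receive workflow with the AWS SDK for .NET (the queue name and message body are placeholders, and credentials are assumed to come from your environment):

    using Amazon.SQS;
    using Amazon.SQS.Model;

    var sqs = new AmazonSQSClient();

    // Create (or look up) a queue and capture its URL.
    var queue = await sqs.CreateQueueAsync(new CreateQueueRequest { QueueName = "my-app-events" });

    // Publish a message to the queue.
    await sqs.SendMessageAsync(new SendMessageRequest
    {
        QueueUrl = queue.QueueUrl,
        MessageBody = "{\"itemId\": 42}"
    });

    // Receive up to 10 messages using long polling, process them, then delete each one
    // so it is not delivered again.
    var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
    {
        QueueUrl = queue.QueueUrl,
        MaxNumberOfMessages = 10,
        WaitTimeSeconds = 20
    });

    foreach (var message in response.Messages)
    {
        // ... process message.Body ...
        await sqs.DeleteMessageAsync(queue.QueueUrl, message.ReceiptHandle);
    }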

Authentication mechanisms are provided to ensure that messages stored in Amazon SQS are secured against unauthorized access. You can control who can send messages to a queue, and who can receive messages from a queue.

AWS SQS provides two types of queues: Standard queues and FIFO queues.

Standard Queues

Standard queues are the default queue type, providing high throughput with nearly unlimited transactions per second. They offer at-least-once delivery, so occasionally more than one copy of a message may be delivered. They also provide best-effort ordering: messages are generally delivered in the order they are sent, but occasionally they can arrive in a different order.

You should consider using Standard queues when:

  • Throughput is important.
  • Your application can process messages that arrive more than once.
  • Your application is not concerned about the order in which messages arrive.

FIFO Queues

With FIFO queues, message order is preserved: messages are received in the exact order in which they were sent. A message is delivered once and remains available until a consumer deletes it, and duplicates are not introduced into the queue. FIFO queues are limited to 300 transactions per second, and they are not available in all regions.

You should consider using FIFO queues when:

  • Ordering of the events/messages is critical.
  • Your application cannot tolerate duplicates.
  • You don’t need or expect a high throughput of messages.
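
To make that concrete, here is a rough sketch of sending to a FIFO queue with the AWS SDK for .NET. The queue URL and IDs are placeholders; FIFO queue names must end in .fifo, and messages with the same MessageGroupId are delivered in order:

    using Amazon.SQS;
    using Amazon.SQS.Model;

    var sqs = new AmazonSQSClient();

    await sqs.SendMessageAsync(new SendMessageRequest
    {
        QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo", // placeholder
        MessageBody = "{\"orderId\": 42, \"status\": \"created\"}",
        MessageGroupId = "order-42",                // ordering is preserved within a message group
        MessageDeduplicationId = "order-42-created" // or enable content-based deduplication on the queue
    });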

More Info:

Azure Queues

Azure also supports two implementations of message queuing: Storage queues and Service Bus queues. Each has a slightly different feature set; depending on your needs you can choose one or the other, or use both.

Storage Queue

Storage queues were introduced first, as a dedicated queue storage mechanism built on top of Azure Storage services. They feature a simple REST-based GET/PUT/PEEK interface, providing reliable, persistent messaging within and between services.

You should consider using Storage queues when:

  • Your application must store over 80 GB of messages in a queue.
  • Your application wants to track progress for processing a message inside of the queue. This is useful if the worker processing a message crashes. A subsequent worker can then use that information to continue from where the prior worker left off.
  • You require server side logs of all of the transactions executed against your queues.
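
A minimal sketch of that simple put/get interface using the Azure.Storage.Queues client (the connection string and queue name are placeholders) looks like this:

    using Azure.Storage.Queues;
    using Azure.Storage.Queues.Models;

    // Connect to a storage queue (the connection string would normally come from configuration).
    var queue = new QueueClient("<storage-connection-string>", "sitecore-events");
    queue.CreateIfNotExists();

    // Put a message on the queue.
    queue.SendMessage("{\"itemId\": 42}");

    // Get messages; each becomes invisible for the visibility timeout while it is processed.
    QueueMessage[] messages = queue.ReceiveMessages(maxMessages: 5).Value;
    foreach (QueueMessage message in messages)
    {
        // ... process message.Body ...
        queue.DeleteMessage(message.MessageId, message.PopReceipt);
    }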

Service Bus queues

Service Bus queues are built on top of the broader Azure messaging infrastructure designed to integrate applications or application components. They support queuing as well as publish/subscribe, and more advanced integration patterns.

You should consider using Service Bus queues when:

  • Your solution must be able to receive messages without having to poll the queue. With Service Bus, this can be achieved through the use of the long-polling receive operation using the TCP-based protocols that Service Bus supports.
  • Your solution requires the queue to provide a guaranteed first-in-first-out (FIFO) ordered delivery.
  • Your solution must be able to support automatic duplicate detection.
  • You want your application to process messages as parallel long-running streams (messages are associated with a stream using the SessionId property on the message). In this model, each node in the consuming application competes for streams, as opposed to messages. When a stream is given to a consuming node, the node can examine the state of the application stream using transactions.
  • Your solution requires transactional behavior and atomicity when sending or receiving multiple messages from a queue.
  • Your application handles messages that can exceed 64 KB but will not likely approach the 256 KB limit.
  • You deal with a requirement to provide a role-based access model to the queues, and different rights/permissions for senders and receivers.
  • Your queue size will not grow larger than 80 GB.
  • You want to use the AMQP 1.0 standards-based messaging protocol.
  • You can envision an eventual migration from queue-based point-to-point communication to a message exchange pattern that enables seamless integration of additional receivers (subscribers), each of which receives independent copies of either some or all messages sent to the queue. The latter refers to the publish/subscribe capability natively provided by Service Bus.
  • Your messaging solution must be able to support the “At-Most-Once” delivery guarantee without the need for you to build the additional infrastructure components.
  • You would like to be able to publish and consume batches of messages.
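
As a rough sketch using the Azure.Messaging.ServiceBus client (the connection string and queue name are placeholders), a sender puts a message on the queue and a receiver completes it once processing succeeds, which is what removes it from the queue:

    using Azure.Messaging.ServiceBus;

    await using var client = new ServiceBusClient("<service-bus-connection-string>");

    // Send a message to the queue.
    ServiceBusSender sender = client.CreateSender("sitecore-events");
    await sender.SendMessageAsync(new ServiceBusMessage("{\"itemId\": 42}"));

    // Receive a message and complete it once processing succeeds, removing it from the queue.
    ServiceBusReceiver receiver = client.CreateReceiver("sitecore-events");
    ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
    if (message != null)
    {
        // ... process message.Body ...
        await receiver.CompleteMessageAsync(message);
    }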

More Info

A few things you should consider

When looking at a potential messaging solution, there are some things you need to consider.

Retention Period – usually a message enters a queue and is processed within a relatively short period of time. If a message has not been processed after a default period of time, it is automatically deleted. Consider how long messages will be retained before they are deleted.

Number of Queues – most systems support multiple queues. I recommend having a dedicated queue for each message type, allowing you to configure each queue based on its contents and the applications that need to process that message type. Also consider how many environments you anticipate, as you will probably need separate queues for each environment to ensure cross-contamination of messages does not occur.

Failed Messages – failed or poisoned messages are messages that could not be processed successfully after the allowed number of retries has been reached. How are these failed messages going to be handled, or do you even care? Are they automatically deleted, or is there a mechanism in place to move them to a different form of storage? It depends on the importance of the message. If, for example, these messages contain important information and must be processed, then you will want to ensure they are not lost and that a proper mechanism is in place to retain them until they can be processed. This could be in the form of another queue, such as a dead letter queue. If your queuing system does not support this out of the box, you might need to implement something to handle this scenario.

What happens to messages that cannot be delivered? Most messaging systems provide a solution; for example, Amazon SQS provides a dead letter queue. When a consumer fetches a message from a queue, the message remains on the queue while it is being processed, and once it has been processed successfully the same consumer deletes it from the queue. If there is an error processing the message, the message’s receive count is incremented, and once a configured receive limit is reached the message is moved to the dead letter queue.
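
With SQS, for instance, the dead letter queue and the receive limit are configured on the source queue through a redrive policy. A sketch with the AWS SDK for .NET (the queue URL and ARN are placeholders):

    using System.Collections.Generic;
    using Amazon.SQS;
    using Amazon.SQS.Model;

    var sqs = new AmazonSQSClient();

    // After a message has been received maxReceiveCount times without being deleted,
    // SQS moves it to the dead letter queue identified by deadLetterTargetArn.
    await sqs.SetQueueAttributesAsync(new SetQueueAttributesRequest
    {
        QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders", // placeholder
        Attributes = new Dictionary<string, string>
        {
            ["RedrivePolicy"] =
                "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:orders-dlq\",\"maxReceiveCount\":\"5\"}"
        }
    });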

Message Locking – this is the mechanism that prevents a message from being processed multiple times. Once a message is read, it is locked until it has been processed and deleted from the queue. While it is being processed, the message is invisible to other consumers, ensuring it is only processed once. This lock is usually automatic and remains in place for a default period of time before it is released. Depending on how long your system takes to process a message, you might need to change the default visibility timeout.
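
In SQS, for example, this lock is the visibility timeout, and it can be extended for a specific message if processing is taking longer than expected (the queue URL below is a placeholder):

    using System.Linq;
    using Amazon.SQS;
    using Amazon.SQS.Model;

    var sqs = new AmazonSQSClient();
    var queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"; // placeholder

    // Receive a message; it is now invisible to other consumers for the queue's visibility timeout.
    var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest { QueueUrl = queueUrl });
    var message = response.Messages.FirstOrDefault();
    if (message != null)
    {
        // Processing is taking longer than expected, so extend the lock on this message to 5 minutes.
        await sqs.ChangeMessageVisibilityAsync(queueUrl, message.ReceiptHandle, 300);

        // ... finish processing, then delete the message so it is not redelivered ...
        await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
    }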

Monitoring – are there mechanisms in place that allow you to easily monitor and even peek inside the queues? How easy are these tools to use, and who will have access or be able to monitor the queues using them?

Metrics – what kind of metric reporting is available: number of messages received, message throughput, access to the actual message queue content, etc.?

Message Retry – can you configure how many times a message is retried before it is considered a failed message?

Message Attributes – some queuing platforms support additional attributes on a message as well as the message body. Depending on your implementation, these attributes can contain extremely useful information or allow you to include additional information in a message without touching the actual message body. What information might be useful as an attribute? It could be the number of failed attempts, the date the message was first added to the queue, or exceptions raised from previous failed attempts. If attributes aren’t supported and this information is important or useful, then you need to consider including it as part of your message body.
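
With SQS, for example, attributes travel alongside the body as typed name/value pairs; the attribute names below are purely illustrative:

    using System;
    using System.Collections.Generic;
    using Amazon.SQS;
    using Amazon.SQS.Model;

    var sqs = new AmazonSQSClient();

    // Metadata such as the enqueue date or a failure count can ride along as attributes,
    // so consumers can inspect it without parsing the message body.
    await sqs.SendMessageAsync(new SendMessageRequest
    {
        QueueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/orders", // placeholder
        MessageBody = "{\"orderId\": 42}",
        MessageAttributes = new Dictionary<string, MessageAttributeValue>
        {
            ["EnqueuedAtUtc"] = new MessageAttributeValue { DataType = "String", StringValue = DateTime.UtcNow.ToString("o") },
            ["FailedAttempts"] = new MessageAttributeValue { DataType = "Number", StringValue = "0" }
        }
    });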

Limitations – what kind of limitations are imposed, if any? This could be the size of a message or the number of messages that can be added to a queue for processing during a given period.

That’s all for now. In the next post I will look at adding messages to a queue and retrieving messages from a queue with Sitecore.
