The stasis message bus has been at the core of Asterisk since Asterisk 13 and is used every day when you run Asterisk. It provides the foundation for such things as CDR, CEL, ARI, AMI, and some operations in applications. It allows a message to be published and then handled by subscribers. These messages are automatically created and published as things happen in the system: a channel being created, a channel getting new connected line information, a PJSIP endpoint becoming available. These all result in a message being created and published. Now that stasis has been in use for some time and we better understand its usage patterns, we've begun working on improving its performance.
It is said that when improving performance you can either do less or do things more intelligently. Both of these apply to stasis and are a good guide for how to approach improving it. To that end, a few things are already up for review, so let's take a look! Do note, though, that these changes are currently targeted for Asterisk 17, so you'll have something to look forward to when it is released!
Stasis Channel Snapshot Caching
Snapshots in stasis are a point-in-time view of an object. The most used snapshot is for channels. When creating a stasis message involving a channel, you don't want to include the channel itself, as it can change and may have locking considerations; instead, a read-only, immutable snapshot of the channel is included. This ensures that the message contains a view of the channel as it was when the message was created, not when the message was processed.
These snapshots are cached for retrieval and use by anything in the system, so consumers don't need to go directly to a channel. They are also used to provide a consumer of a message with the previous snapshot of the channel alongside the new one, so it is aware of the change that happened. This reduces locking and can be useful. Unfortunately, we previously used a generic mechanism for storing these snapshots, which resulted in more work and more stasis messages. While generic mechanisms are usually a good thing, in this case it was using more CPU without providing much in return.
A change has now gone up for review that makes the caching an implementation detail of channels themselves, as they know best how to cache the information. The benefit is that a snapshot is now also stored on the channel itself, reducing lookups when we already have the channel and avoiding snapshot creation when a new one is not needed. This follows the "do less" approach.
While this did result in an API change, it actually made usage of the channel snapshot cache simpler for consumers and reduced the size of the API itself. The implementation is now also easier for developers to follow if they need to work on the code.
Stasis Subscription Filtering
When stasis was originally created, there was no concept of filtering, or of the idea that a subscriber might only want a select few of the messages it receives. A subscription received every message published on the topic it was subscribed to. This kept the implementation simple, but it caused stasis to do more work than necessary: when a message is published it is given to each subscriber, which may result in the message being dispatched to another thread that wakes up, does work, and whose work may simply be to return and do nothing.
A change has now gone up for review which adds the ability for a subscriber to specify which message types it is interested in. The publishing operation then filters published messages so that a subscriber does not receive messages it is not interested in. This means its thread does not wake up at all and does no work. Overall, this accomplishes the same thing while doing less work.
A particularly nice thing is that in the future this change could be leveraged even further and made more intelligent. Since stasis is now aware of what subscribers are interested in, the initial creation of a stasis message could be skipped entirely when no subscriber on a topic is interested in it. Stay tuned to see if this comes to fruition!
The Performance Impact
While it is early days, the two changes outlined here have, in preliminary testing, reduced CPU usage by approximately 20% in calling scenarios. I look forward to seeing this reduction grow even further as more changes are made to improve stasis.