C# (CSharp) Disruptor Namespace

Nested Namespaces

Disruptor.Collections
Disruptor.ConsoleApp
Disruptor.Dsl
Disruptor.MemoryLayout
Disruptor.Perf
Disruptor.PerfTests
Disruptor.Test
Disruptor.Tests
Disruptor.Utils
Disruptor.WaitStrategy

Classes

Name Description
AlertException Used to alert IEventProcessors waiting on an ISequenceBarrier of status changes.
BlockingWaitStrategy Blocking strategy that uses a lock and condition variable for IEventProcessors waiting on a barrier. This strategy can be used when throughput and low latency are less important than conserving CPU resources (a configuration sketch appears at the end of this list).
BusySpinWaitStrategy Busy-spin strategy that uses a busy-spin loop for IEventProcessors waiting on a barrier. It consumes CPU to avoid syscalls that can introduce latency jitter, and is best used when threads can be bound to specific CPU cores.
Entry Entries are the items exchanged via a RingBuffer.
FatalExceptionHandler
FixedSequenceGroup Hides a group of Sequences behind a single Sequence.
IgnoreExceptionHandler Convenience implementation of an exception handler that uses Console.WriteLine to log the exception.
InitialCursorValue
InsufficientCapacityException

Exception thrown when it is not possible to insert a value into the ring buffer without wrapping the consuming sequences. Used specifically when claiming with the RingBuffer.TryNext() call.

For efficiency this exception will not have a stack trace.
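
A hedged usage sketch of the try-claim path described above, translating the Java-style RingBuffer#tryNext() reference into the .NET naming used elsewhere on this page. The MyEvent type and the exact RingBuffer<T> members are assumptions; treat this as an illustration, not the library's confirmed API.

    // Sketch only: assumes RingBuffer<T>.TryNext() throws InsufficientCapacityException
    // when the claim would wrap the consuming sequences, as in the Java Disruptor.
    static bool TryPublish(RingBuffer<MyEvent> ringBuffer, long value)
    {
        try
        {
            long sequence = ringBuffer.TryNext();   // claim a slot without waiting
            try
            {
                ringBuffer[sequence].Value = value; // fill the pre-allocated entry
            }
            finally
            {
                ringBuffer.Publish(sequence);       // make it visible to consumers
            }
            return true;
        }
        catch (InsufficientCapacityException)
        {
            return false;                           // buffer full: caller can retry or drop
        }
    }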

LiteTimeoutBlockingWaitStrategy Variation of the TimeoutBlockingWaitStrategy that attempts to elide conditional wake-ups when the lock is uncontended.
MultiProducerSequencer

Coordinator for claiming sequences for access to a data structure while tracking dependent Sequences. Suitable for use for sequencing across multiple publisher threads.

Note on Sequencer.Cursor: with this sequencer the cursor value is updated after the call to Sequencer.Next(); to determine the highest available sequence that can be read, GetHighestPublishedSequence should be used instead.
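
A minimal sketch of that note, assuming the members mentioned above (Cursor, Next, Publish and GetHighestPublishedSequence) have the same shape as in the Java Disruptor; the helper method and its parameters are hypothetical.

    // Publisher threads (any number) each claim, write, then publish:
    //   long s = sequencer.Next(); /* write entry s */ sequencer.Publish(s);
    //
    // Reader side: the cursor alone is not a safe upper bound, because slots
    // between a consumer's position and the cursor may not be published yet.
    static long HighestReadable(MultiProducerSequencer sequencer, long nextToRead)
    {
        long cursor = sequencer.Cursor; // updated after Next(), per the note above
        return sequencer.GetHighestPublishedSequence(nextToRead, cursor);
    }
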
MultiThreadedStrategy
NoOpConsumer
PhasedBackoffWaitStrategy

Phased wait strategy for IEventProcessors waiting on a barrier.

This strategy can be used when throughput and low latency are less important than conserving CPU resources. It spins, then yields, then blocks on the configured blocking strategy (illustrated in the sketch below).

PhasedBackoffWaitStrategy.LockBlockingStrategy
PhasedBackoffWaitStrategy.SleepBlockingStrategy
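
The spin-then-yield-then-block progression can be illustrated with a simplified loop. This is only a sketch of the idea under assumed timeouts, not the class's actual implementation: the final Thread.Sleep stands in for whichever blocking strategy (lock-based or sleep-based) is configured, and Sequence.Value is assumed to expose the current counter.

    using System;
    using System.Threading;
    using Disruptor;

    // Illustrative only: waits until 'cursor' reaches 'expectedSequence' by
    // spinning, then yielding, then sleeping as a stand-in for the configured
    // blocking strategy.
    static void PhasedWait(Sequence cursor, long expectedSequence,
                           TimeSpan spinTimeout, TimeSpan yieldTimeout)
    {
        var start = DateTime.UtcNow;
        while (cursor.Value < expectedSequence)
        {
            var elapsed = DateTime.UtcNow - start;
            if (elapsed < spinTimeout)
            {
                // phase 1: busy spin
            }
            else if (elapsed < spinTimeout + yieldTimeout)
            {
                Thread.Yield();   // phase 2: give up the current time slice
            }
            else
            {
                Thread.Sleep(1);  // phase 3: stand-in for the blocking fallback
            }
        }
    }
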
ProcessingSequenceBarrier ISequenceBarrier handed out for gating IEventProcessors on a cursor sequence and optional dependent IEventProcessors.
RingBufferFields
Sequence Cache-line-padded sequence counter. Can be used across threads without worrying about false sharing if located adjacent to another counter in memory.
Sequence.Fields
SequenceGroup A Sequence group that can dynamically have Sequences added and removed while remaining thread safe. The SequenceGroup.Value get and set methods are lock free and can be called concurrently with SequenceGroup.Add and SequenceGroup.Remove (see the sketch below).
SequenceGroups Provides static methods for managing a SequenceGroup object.
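
Based on the description above, adding and removing gating sequences at runtime might look like the following sketch. The member names come from the entry above; the constructors and the meaning of Value (the minimum of the group in the Java Disruptor) are assumptions.

    using Disruptor;

    var consumerA = new Sequence();
    var consumerB = new Sequence();

    var group = new SequenceGroup();
    group.Add(consumerA);        // thread safe, even while publishers are running
    group.Add(consumerB);

    long gating = group.Value;   // lock-free read (the slowest member in the Java version)

    group.Remove(consumerB);     // dynamically drop a consumer
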
SingleProducerSequencer

Coordinator for claiming sequences for access to a data structure while tracking dependent Sequences.

Generally not safe for use from multiple threads as it does not implement any barriers.

SingleProducerSequencer.Fields
SingleProducerSequencer.Padding
SingleProducerSequencerFields
SingleProducerSequencerPad
SingleThreadedStrategy
SleepingWaitStrategy Sleeping strategy that uses a SpinWait while the IEventProcessors are waiting on a barrier. This strategy is a good compromise between performance and CPU resource. Latency spikes can occur after quiet periods.
SpinWaitWaitStrategy Spin strategy that uses a SpinWait for IEventProcessors waiting on a barrier.

This strategy is a good compromise between performance and CPU resource. Latency spikes can occur after quiet periods.

TimeoutBlockingWaitStrategy
TimeoutException
Util Set of common functions used by the Disruptor
YieldingWaitStrategy Yielding strategy that uses Thread.Yield() for IEventProcessors waiting on a barrier after an initial spin. This strategy is a good compromise between performance and CPU resource without incurring significant latency spikes.
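
All of the wait strategies listed above are normally chosen once, when the Disruptor is constructed through the Disruptor.Dsl namespace. The following configuration sketch assumes a DSL similar to the Java Disruptor; the event type, handler and exact constructor overload are illustrative, not confirmed against this version of the port.

    using System.Threading.Tasks;
    using Disruptor;
    using Disruptor.Dsl;

    public class ValueEvent { public long Value; }

    public class ValueEventHandler : IEventHandler<ValueEvent>
    {
        public void OnEvent(ValueEvent data, long sequence, bool endOfBatch)
        {
            // process the event
        }
    }

    public static class Program
    {
        public static void Main()
        {
            // Swap the wait strategy to trade CPU for latency: BlockingWaitStrategy,
            // SleepingWaitStrategy, YieldingWaitStrategy, BusySpinWaitStrategy, ...
            var disruptor = new Disruptor<ValueEvent>(
                () => new ValueEvent(),      // event factory for pre-allocated entries
                1024,                        // ring buffer size (power of two)
                TaskScheduler.Default,
                ProducerType.Single,
                new YieldingWaitStrategy());

            disruptor.HandleEventsWith(new ValueEventHandler());
            var ringBuffer = disruptor.Start();

            // ringBuffer can then be used to claim and publish entries,
            // as in the earlier TryNext() sketch.
        }
    }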