C# Class Orleans.KafkaStreamProvider.KafkaQueue.TimedQueueCache.TimedQueueCache

The TimedQueueCache works similarly to the SimpleQueueCache, but it also has a Timespan that serves as an expiration and retention time. That is, only items whose Timespan has expired (and which have been consumed by all cursors, of course) are allowed to be removed from the cache. This way the cache always guarantees to hold every item inserted within the given Timespan (for example, if the Timespan is one hour, all messages inserted in the last hour remain in the cache, regardless of whether they have been consumed). The TimedQueueCache can also hold a callback that is invoked when items are removed from the cache, and it allows defining an interval for how many items must be removed before the callback is called.
Inheritance: IQueueCache
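A minimal construction sketch based on the constructor signature documented below, assuming the hosting stream adapter already supplies the QueueId and Logger (neither is created here) and that the namespace matches the class's full name above; the one-hour cacheTimespan means every message added in the last hour stays cached whether or not it has been consumed.

using System;
using Orleans.Runtime;
using Orleans.Streams;
// Namespace assumed from the full class name above.
using Orleans.KafkaStreamProvider.KafkaQueue.TimedQueueCache;

public static class TimedQueueCacheSetup
{
    // queueId and logger are assumed to come from the stream adapter that owns this cache.
    public static TimedQueueCache CreateHourlyCache(QueueId queueId, Logger logger)
    {
        // Retain at least one hour of messages, capped at 4096 items spread
        // over 10 time buckets (the numeric values are illustrative only).
        return new TimedQueueCache(
            queueId,
            cacheTimespan: TimeSpan.FromHours(1),
            cacheSize: 4096,
            numOfBuckets: 10,
            logger: logger);
    }
}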

Private Properties

Property Type Description
Add void
CalculateMessagesToAdd void
FindNodeBySequenceToken LinkedListNode
FloorSequenceToken StreamSequenceToken
GetOrCreateBucket TimedQueueCacheBucket
GetTimestampForItem System.DateTime
InitializeCursor void
Log void
RemoveLastMessage IBatchContainer
RemoveMessagesFromCache List
ResetCursor void
SetCursor void
TryGetNextMessage bool
UpdateCursor void

Public Methods

Method Description
AddToCache ( IList msgs ) : void
GetCacheCursor ( IStreamIdentity streamIdentity, StreamSequenceToken token ) : IQueueCacheCursor
GetMaxAddCount ( ) : int

Because our bucket sizes are inconsistent (they also depend on time), we need to make sure that the cache doesn't take more messages than it can hold; see the CalculateMessagesToAdd function.

IsUnderPressure ( ) : bool
TimedQueueCache ( QueueId queueId, System.TimeSpan cacheTimespan, int cacheSize, int numOfBuckets, Logger logger ) : System
TryPurgeFromCache ( IList &purgedItems ) : bool

Private Methods

Method Description
Add ( IBatchContainer batch, StreamSequenceToken sequenceToken ) : void
CalculateMessagesToAdd ( ) : void
FindNodeBySequenceToken ( StreamSequenceToken sequenceToken ) : LinkedListNode
FloorSequenceToken ( StreamSequenceToken token ) : StreamSequenceToken
GetOrCreateBucket ( ) : TimedQueueCacheBucket
GetTimestampForItem ( IBatchContainer batch ) : System.DateTime
InitializeCursor ( TimedQueueCacheCursor cursor, StreamSequenceToken sequenceToken ) : void
Log ( Logger logger, string format ) : void
RemoveLastMessage ( ) : IBatchContainer
RemoveMessagesFromCache ( ) : List
ResetCursor ( TimedQueueCacheCursor cursor, StreamSequenceToken token ) : void
SetCursor ( TimedQueueCacheCursor cursor, LinkedListNode item ) : void
TryGetNextMessage ( TimedQueueCacheCursor cursor, IBatchContainer &batch ) : bool

Acquires the next message in the cache at the provided cursor

UpdateCursor ( TimedQueueCacheCursor cursor, LinkedListNode item ) : void

Method Descriptions

AddToCache() public method

public AddToCache ( IList msgs ) : void
msgs IList
Returns void

GetCacheCursor() public method

public GetCacheCursor ( IStreamIdentity streamIdentity, StreamSequenceToken token ) : IQueueCacheCursor
streamIdentity IStreamIdentity
token StreamSequenceToken
Returns IQueueCacheCursor
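A hedged consumption sketch: it drains whatever the cache currently holds for one stream, using the MoveNext/GetCurrent members of the Orleans IQueueCacheCursor contract (not shown on this page); ReadBatches is a hypothetical helper, and the exact meaning of a null token is left to the cache implementation.

using System;
using System.Collections.Generic;
using Orleans.Streams;

public static class CursorReader
{
    // streamIdentity and token are assumed to come from the caller
    // (token may be a previously observed StreamSequenceToken).
    public static IEnumerable<IBatchContainer> ReadBatches(
        IQueueCache cache, IStreamIdentity streamIdentity, StreamSequenceToken token)
    {
        var cursor = cache.GetCacheCursor(streamIdentity, token);
        while (cursor.MoveNext())
        {
            Exception deliveryError;
            var batch = cursor.GetCurrent(out deliveryError);
            if (deliveryError == null && batch != null)
            {
                yield return batch;
            }
        }
    }
}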

GetMaxAddCount() public method

Because our bucket sizes are inconsistent (they also depend on time), we need to make sure that the cache doesn't take more messages than it can hold; see the CalculateMessagesToAdd function.
public GetMaxAddCount ( ) : int
Returns int
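A sketch of the producer side, assuming the Kafka receiver has already turned pulled records into IBatchContainer instances (CacheFeeder and pulledBatches are hypothetical names); it uses GetMaxAddCount to respect the limit computed by CalculateMessagesToAdd, and IsUnderPressure to back off while slow cursors are still reading.

using System.Collections.Generic;
using System.Linq;
using Orleans.Streams;

public static class CacheFeeder
{
    public static int Feed(IQueueCache cache, IReadOnlyList<IBatchContainer> pulledBatches)
    {
        if (cache.IsUnderPressure())
        {
            // A slow cursor is still reading older items; let the caller retry later.
            return 0;
        }

        // GetMaxAddCount reflects CalculateMessagesToAdd: buckets are time based,
        // so the cache may accept fewer items than its nominal size.
        int room = cache.GetMaxAddCount();
        var toAdd = pulledBatches.Take(room).ToList();
        if (toAdd.Count > 0)
        {
            cache.AddToCache(toAdd);
        }
        return toAdd.Count;
    }
}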

IsUnderPressure() public method

public IsUnderPressure ( ) : bool
Returns bool

TimedQueueCache() public method

public TimedQueueCache ( QueueId queueId, System.TimeSpan cacheTimespan, int cacheSize, int numOfBuckets, Logger logger ) : System
queueId QueueId
cacheTimespan System.TimeSpan
cacheSize int
numOfBuckets int
logger Logger
Returns System

TryPurgeFromCache() public method

public TryPurgeFromCache ( IList &purgedItems ) : bool
purgedItems IList
Returns bool
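A sketch of a periodic purge call, assuming the caller only needs a count of what was evicted (CachePurger is a hypothetical helper, and the out parameter type follows the Orleans IQueueCache contract); items come back only once they are older than the cache Timespan and have been consumed by every cursor, as described above.

using System.Collections.Generic;
using Orleans.Streams;

public static class CachePurger
{
    // Intended to be invoked periodically, e.g. from the adapter's purge timer.
    public static int Purge(IQueueCache cache)
    {
        IList<IBatchContainer> purgedItems;
        if (cache.TryPurgeFromCache(out purgedItems) && purgedItems != null)
        {
            // The purged batches could be logged or returned to a pool here.
            return purgedItems.Count;
        }
        return 0;
    }
}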