C# Class Orleans.KafkaStreamProvider.KafkaQueue.TimedQueueCache.TimedQueueCache

The TimedQueueCache works similarly to the SimpleQueueCache, but it also has a TimeSpan that serves as an expiration and retention time. That is, only items whose TimeSpan has expired (and which have been consumed by all cursors, of course) may be removed from the cache. This way the cache guarantees to hold every item inserted within the given TimeSpan (for example, if the TimeSpan is 1 hour, all messages inserted during the last hour remain in the cache, regardless of whether they were consumed). The TimedQueueCache can also hold a callback that is invoked when items are removed from the cache, and it allows defining an interval for how many items must be removed before the callback is called.
Inheritance: IQueueCache
Open project: gigya/Orleans.KafkaStreamProvider
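
As an orientation, here is a minimal construction sketch. It is not taken from the project itself: it assumes the Orleans.Streams QueueId and Orleans.Runtime Logger types, the namespace shown in the page title, and an illustrative helper name CreateCache; the concrete values passed to the constructor are placeholders.

    using System;
    using Orleans.Runtime;
    using Orleans.Streams;
    using Orleans.KafkaStreamProvider.KafkaQueue.TimedQueueCache;

    // Illustrative factory helper; the QueueId and Logger would normally be
    // supplied by the stream provider's adapter factory rather than created here.
    public static class TimedQueueCacheExample
    {
        public static IQueueCache CreateCache(QueueId queueId, Logger logger)
        {
            // Retain every message for at least one hour, cap the cache at 4096
            // messages, and spread the retention window over 10 time buckets.
            return new TimedQueueCache(
                queueId,
                cacheTimespan: TimeSpan.FromHours(1),
                cacheSize: 4096,
                numOfBuckets: 10,
                logger: logger);
        }
    }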

Private Properties

Name Type Description
Add void
CalculateMessagesToAdd void
FindNodeBySequenceToken LinkedListNode
FloorSequenceToken StreamSequenceToken
GetOrCreateBucket TimedQueueCacheBucket
GetTimestampForItem System.DateTime
InitializeCursor void
Log void
RemoveLastMessage IBatchContainer
RemoveMessagesFromCache List
ResetCursor void
SetCursor void
TryGetNextMessage bool
UpdateCursor void

Public Methods

Method Description
AddToCache ( IList msgs ) : void
GetCacheCursor ( IStreamIdentity streamIdentity, StreamSequenceToken token ) : IQueueCacheCursor
GetMaxAddCount ( ) : int

Because our bucket sizes are inconsistent (they also depend on time), we need to make sure that the cache does not take more messages than it can handle. See the function CalculateMessagesToAdd.

IsUnderPressure ( ) : bool
TimedQueueCache ( QueueId queueId, System.TimeSpan cacheTimespan, int cacheSize, int numOfBuckets, Logger logger ) : System
TryPurgeFromCache ( IList &purgedItems ) : bool

Private Methods

Method Description
Add ( IBatchContainer batch, StreamSequenceToken sequenceToken ) : void
CalculateMessagesToAdd ( ) : void
FindNodeBySequenceToken ( StreamSequenceToken sequenceToken ) : LinkedListNode
FloorSequenceToken ( StreamSequenceToken token ) : StreamSequenceToken
GetOrCreateBucket ( ) : TimedQueueCacheBucket
GetTimestampForItem ( IBatchContainer batch ) : System.DateTime
InitializeCursor ( TimedQueueCacheCursor cursor, StreamSequenceToken sequenceToken ) : void
Log ( Logger logger, string format ) : void
RemoveLastMessage ( ) : IBatchContainer
RemoveMessagesFromCache ( ) : List
ResetCursor ( TimedQueueCacheCursor cursor, StreamSequenceToken token ) : void
SetCursor ( TimedQueueCacheCursor cursor, LinkedListNode item ) : void
TryGetNextMessage ( TimedQueueCacheCursor cursor, IBatchContainer &batch ) : bool

Acquires the next message in the cache at the provided cursor

UpdateCursor ( TimedQueueCacheCursor cursor, LinkedListNode item ) : void

Method Details

AddToCache() public method

public AddToCache ( IList msgs ) : void
msgs IList
Return void

GetCacheCursor() public method

public GetCacheCursor ( IStreamIdentity streamIdentity, StreamSequenceToken token ) : IQueueCacheCursor
streamIdentity IStreamIdentity
token StreamSequenceToken
Return IQueueCacheCursor
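
A hedged usage sketch for the cursor follows. It assumes the standard Orleans IQueueCacheCursor members MoveNext() and GetCurrent(out Exception); the Drain helper and its variable names are illustrative, not part of the library.

    using System;
    using Orleans.Streams;
    using Orleans.KafkaStreamProvider.KafkaQueue.TimedQueueCache;

    public static class CursorExample
    {
        // Illustrative only: walk the cached messages of one stream, starting
        // at startToken, using the IQueueCacheCursor contract.
        public static void Drain(TimedQueueCache cache, IStreamIdentity streamIdentity, StreamSequenceToken startToken)
        {
            IQueueCacheCursor cursor = cache.GetCacheCursor(streamIdentity, startToken);
            while (cursor.MoveNext())
            {
                Exception ex;
                IBatchContainer batch = cursor.GetCurrent(out ex);
                if (ex != null)
                {
                    continue; // the cursor reports retrieval problems through this exception
                }
                // hand batch.GetEvents<T>() to the subscribed consumers here
            }
        }
    }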

GetMaxAddCount() public method

Because our bucket sizes are inconsistent (they also depend on time), we need to make sure that the cache does not take more messages than it can handle. See the function CalculateMessagesToAdd.
public GetMaxAddCount ( ) : int
Return int
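
The real calculation lives in the private CalculateMessagesToAdd. Purely to illustrate the idea of capping additions by remaining capacity, a hypothetical sketch with made-up names might look like this; it is not the project's implementation, which also has to account for the time-dependent bucket sizes.

    using System;

    // Hypothetical illustration only - not the project's CalculateMessagesToAdd.
    internal static class MaxAddCountSketch
    {
        public static int RemainingCapacity(int maxCacheSize, int cachedMessageCount)
        {
            // Never offer the adapter more slots than the overall size limit leaves free.
            return Math.Max(maxCacheSize - cachedMessageCount, 0);
        }
    }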

IsUnderPressure() public method

public IsUnderPressure ( ) : bool
Return bool

TimedQueueCache() public method

public TimedQueueCache ( QueueId queueId, System.TimeSpan cacheTimespan, int cacheSize, int numOfBuckets, Logger logger ) : System
queueId QueueId
cacheTimespan System.TimeSpan
cacheSize int
numOfBuckets int
logger Logger
Return System

TryPurgeFromCache() public method

public TryPurgeFromCache ( IList &purgedItems ) : bool
purgedItems IList
Return bool
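
Taken together, the public members are normally driven by the stream provider's pulling agent in a cycle roughly like the sketch below. PumpOnce and its arguments are illustrative, and the real agent logic in Orleans differs in detail.

    using System;
    using System.Collections.Generic;
    using Orleans.Streams;
    using Orleans.KafkaStreamProvider.KafkaQueue.TimedQueueCache;

    public static class PullingLoopSketch
    {
        // Illustrative pump cycle: ask the cache how much it can accept, add the
        // newly received batches, then purge whatever has both left the retention
        // window and been consumed by every cursor.
        public static void PumpOnce(TimedQueueCache cache, IList<IBatchContainer> receivedBatches)
        {
            int maxAdd = cache.GetMaxAddCount();
            if (maxAdd > 0 && receivedBatches.Count > 0)
            {
                // The real agent asks the receiver for at most maxAdd messages;
                // here we assume receivedBatches already respects that limit.
                cache.AddToCache(receivedBatches);
            }

            IList<IBatchContainer> purgedItems;
            if (cache.TryPurgeFromCache(out purgedItems))
            {
                // purgedItems now holds the entries that were dropped from the cache.
            }

            if (cache.IsUnderPressure())
            {
                // A true result signals the agent to slow down reading from Kafka.
            }
        }
    }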