C# Class Orleans.KafkaStreamProvider.KafkaQueue.TimedQueueCache.TimedQueueCache

The TimedQueueCache works like the SimpleQueueCache, but adds a TimeSpan that serves as an expiration and retention period: an item may be removed from the cache only after its TimeSpan has expired (and, of course, after it has been consumed by all cursors). The cache therefore guarantees to hold every item inserted within the retention window; for example, with a TimeSpan of one hour, all messages inserted in the last hour remain in the cache whether or not they were consumed. The TimedQueueCache can also hold a callback that is invoked when items are removed from the cache, along with an interval defining how many items must be removed before the callback is called.
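The retention rule described above can be illustrated with a minimal, self-contained sketch. This is not the actual Orleans.KafkaStreamProvider implementation; the `CachedItem` type and `CanPurge` helper are made up here purely to show the two conditions an item must satisfy before removal.

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the TimedQueueCache retention rule (hypothetical
// types, not the library's real internals): an item is purgeable only when
// no cursor still needs it AND its retention TimeSpan has fully elapsed.
class TimedRetentionSketch
{
    record CachedItem(string Payload, DateTime Timestamp, int PendingCursors);

    static bool CanPurge(CachedItem item, TimeSpan retention, DateTime now) =>
        item.PendingCursors == 0 && now - item.Timestamp > retention;

    static void Main()
    {
        var retention = TimeSpan.FromHours(1);
        var now = DateTime.UtcNow;

        var items = new List<CachedItem>
        {
            // 2 hours old, consumed by all cursors: purgeable.
            new("old-and-consumed",   now - TimeSpan.FromHours(2),   0),
            // 2 hours old, but one cursor still needs it: kept.
            new("old-but-unconsumed", now - TimeSpan.FromHours(2),   1),
            // Inside the 1-hour retention window: kept even though consumed.
            new("recent-consumed",    now - TimeSpan.FromMinutes(5), 0),
        };

        foreach (var item in items)
            Console.WriteLine($"{item.Payload}: purgeable = {CanPurge(item, retention, now)}");
    }
}
```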
Inheritance: IQueueCache
Open project: gigya/Orleans.KafkaStreamProvider

Private Properties

Property Type Description
Add void
CalculateMessagesToAdd void
FindNodeBySequenceToken LinkedListNode
FloorSequenceToken StreamSequenceToken
GetOrCreateBucket TimedQueueCacheBucket
GetTimestampForItem System.DateTime
InitializeCursor void
Log void
RemoveLastMessage IBatchContainer
RemoveMessagesFromCache List
ResetCursor void
SetCursor void
TryGetNextMessage bool
UpdateCursor void

Public Methods

Method Description
AddToCache ( IList msgs ) : void
GetCacheCursor ( IStreamIdentity streamIdentity, StreamSequenceToken token ) : IQueueCacheCursor
GetMaxAddCount ( ) : int

Because our bucket sizes are inconsistent (they also depend on time), we need to make sure the cache doesn't take more messages than it can hold; see the function CalculateMessagesToAdd.

IsUnderPressure ( ) : bool
TimedQueueCache ( QueueId queueId, System.TimeSpan cacheTimespan, int cacheSize, int numOfBuckets, Logger logger ) : System
TryPurgeFromCache ( IList &purgedItems ) : bool

Private Methods

Method Description
Add ( IBatchContainer batch, StreamSequenceToken sequenceToken ) : void
CalculateMessagesToAdd ( ) : void
FindNodeBySequenceToken ( StreamSequenceToken sequenceToken ) : LinkedListNode
FloorSequenceToken ( StreamSequenceToken token ) : StreamSequenceToken
GetOrCreateBucket ( ) : TimedQueueCacheBucket
GetTimestampForItem ( IBatchContainer batch ) : System.DateTime
InitializeCursor ( TimedQueueCacheCursor cursor, StreamSequenceToken sequenceToken ) : void
Log ( Logger logger, string format ) : void
RemoveLastMessage ( ) : IBatchContainer
RemoveMessagesFromCache ( ) : List
ResetCursor ( TimedQueueCacheCursor cursor, StreamSequenceToken token ) : void
SetCursor ( TimedQueueCacheCursor cursor, LinkedListNode item ) : void
TryGetNextMessage ( TimedQueueCacheCursor cursor, IBatchContainer &batch ) : bool

Acquires the next message in the cache at the provided cursor

UpdateCursor ( TimedQueueCacheCursor cursor, LinkedListNode item ) : void

Method Details

AddToCache() public method

public AddToCache ( IList msgs ) : void
msgs IList
return void

GetCacheCursor() public method

public GetCacheCursor ( IStreamIdentity streamIdentity, StreamSequenceToken token ) : IQueueCacheCursor
streamIdentity IStreamIdentity
token StreamSequenceToken
return IQueueCacheCursor

GetMaxAddCount() public method

Because our bucket sizes are inconsistent (they also depend on time), we need to make sure the cache doesn't take more messages than it can hold; see the function CalculateMessagesToAdd.
public GetMaxAddCount ( ) : int
return int
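To make the capacity idea concrete, here is a hedged, self-contained sketch of how a bounded cache can report a safe add count. The real CalculateMessagesToAdd also accounts for time-dependent bucket sizes, which this toy version does not attempt to reproduce; `MaxAddSketch` and its fields are hypothetical names introduced only for illustration.

```csharp
using System;

// Hypothetical sketch of capping additions to a size-bounded cache; the
// actual TimedQueueCache logic additionally considers time-based buckets.
class MaxAddSketch
{
    readonly int _maxCacheSize;
    int _currentSize;

    public MaxAddSketch(int maxCacheSize) => _maxCacheSize = maxCacheSize;

    // Never report more free slots than the cache actually has left.
    public int GetMaxAddCount() => Math.Max(0, _maxCacheSize - _currentSize);

    public void Add(int count) => _currentSize += count;

    static void Main()
    {
        var cache = new MaxAddSketch(maxCacheSize: 10);
        cache.Add(7);
        Console.WriteLine(cache.GetMaxAddCount()); // 3 slots remain
        cache.Add(3);
        Console.WriteLine(cache.GetMaxAddCount()); // cache is full: 0
    }
}
```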

IsUnderPressure() public method

public IsUnderPressure ( ) : bool
return bool

TimedQueueCache() public method

public TimedQueueCache ( QueueId queueId, System.TimeSpan cacheTimespan, int cacheSize, int numOfBuckets, Logger logger ) : System
queueId QueueId
cacheTimespan System.TimeSpan
cacheSize int
numOfBuckets int
logger Logger
return System

TryPurgeFromCache() public method

public TryPurgeFromCache ( IList &purgedItems ) : bool
purgedItems IList
return bool
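Putting the public surface together, a usage sketch could look as follows. The constructor and method signatures are taken from this page; the cursor members (`MoveNext`, `GetCurrent`) are assumed from the Orleans `IQueueCacheCursor` interface of the same era. The `queueId`, `logger`, `messages`, `streamIdentity`, and `token` variables are placeholders the host stream provider would normally supply, and this fragment is not runnable without the Orleans and Orleans.KafkaStreamProvider packages.

```csharp
// Hedged usage sketch; placeholder variables, not a complete program.
var cache = new TimedQueueCache(
    queueId,                 // QueueId supplied by the stream provider
    TimeSpan.FromHours(1),   // retention window: keep at least the last hour
    cacheSize: 4096,
    numOfBuckets: 10,
    logger);

cache.AddToCache(messages);  // messages: IList of IBatchContainer

// Read back the cached items for one stream, starting at a sequence token.
IQueueCacheCursor cursor = cache.GetCacheCursor(streamIdentity, token);
while (cursor.MoveNext())
{
    IBatchContainer batch = cursor.GetCurrent(out Exception ex);
    // deliver batch to subscribers ...
}
```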