C# Class Orleans.KafkaStreamProvider.KafkaQueue.TimedQueueCache.TimedQueueCache

The TimedQueueCache works similarly to the SimpleQueueCache, but it also has a TimeSpan that serves as an expiration and retention time; only items whose TimeSpan has elapsed (and that were consumed by all cursors, of course) are allowed to be removed from the cache. This way the cache is guaranteed to hold every item inserted within that TimeSpan (for example, if the TimeSpan is 1 hour, all messages inserted in the last hour remain in the cache, regardless of whether they were consumed). The TimedQueueCache can also hold a callback that is invoked when items are removed from the cache, and it allows defining an interval for how many items must be removed before the callback is called.
Inheritance: IQueueCache
Project: gigya/Orleans.KafkaStreamProvider
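
A minimal construction sketch, assuming the constructor signature shown in the method details below; the using directives, the way the QueueId and Logger are obtained, and the numeric values are illustrative assumptions rather than part of this page:

using System;
using Orleans.Runtime;                        // Logger (assumed namespace)
using Orleans.Streams;                        // QueueId, IQueueCache (assumed namespace)
using Orleans.KafkaStreamProvider.KafkaQueue;

public static class TimedQueueCacheConstructionSketch
{
    public static IQueueCache Create(QueueId queueId, Logger logger)
    {
        // Retain at least one hour of messages, spread across 10 time buckets;
        // cacheSize caps the total number of items held (illustrative values).
        return new TimedQueueCache(queueId, TimeSpan.FromHours(1), 4096, 10, logger);
    }
}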

Private Properties

Property Type Description
Add void
CalculateMessagesToAdd void
FindNodeBySequenceToken LinkedListNode
FloorSequenceToken StreamSequenceToken
GetOrCreateBucket TimedQueueCacheBucket
GetTimestampForItem System.DateTime
InitializeCursor void
Log void
RemoveLastMessage IBatchContainer
RemoveMessagesFromCache List
ResetCursor void
SetCursor void
TryGetNextMessage bool
UpdateCursor void

Public Methods

Method Description
AddToCache ( IList msgs ) : void
GetCacheCursor ( IStreamIdentity streamIdentity, StreamSequenceToken token ) : IQueueCacheCursor
GetMaxAddCount ( ) : int

Because our bucket sizes are inconsistent (they also depend on time), we need to make sure that the cache doesn't take more messages than it can hold. See the CalculateMessagesToAdd function.

IsUnderPressure ( ) : bool
TimedQueueCache ( QueueId queueId, System.TimeSpan cacheTimespan, int cacheSize, int numOfBuckets, Logger logger ) : System
TryPurgeFromCache ( IList &purgedItems ) : bool

Private Methods

Method Description
Add ( IBatchContainer batch, StreamSequenceToken sequenceToken ) : void
CalculateMessagesToAdd ( ) : void
FindNodeBySequenceToken ( StreamSequenceToken sequenceToken ) : LinkedListNode
FloorSequenceToken ( StreamSequenceToken token ) : StreamSequenceToken
GetOrCreateBucket ( ) : TimedQueueCacheBucket
GetTimestampForItem ( IBatchContainer batch ) : System.DateTime
InitializeCursor ( TimedQueueCacheCursor cursor, StreamSequenceToken sequenceToken ) : void
Log ( Logger logger, string format ) : void
RemoveLastMessage ( ) : IBatchContainer
RemoveMessagesFromCache ( ) : List
ResetCursor ( TimedQueueCacheCursor cursor, StreamSequenceToken token ) : void
SetCursor ( TimedQueueCacheCursor cursor, LinkedListNode item ) : void
TryGetNextMessage ( TimedQueueCacheCursor cursor, IBatchContainer &batch ) : bool

Acquires the next message in the cache at the provided cursor

UpdateCursor ( TimedQueueCacheCursor cursor, LinkedListNode item ) : void
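
A conceptual sketch of the removal rule described in the class summary, not the actual private implementation (GetTimestampForItem, RemoveLastMessage and the bucket bookkeeping are not reproduced here); names and structure are illustrative:

using System;

public sealed class RetentionRuleSketch
{
    private readonly TimeSpan _cacheTimespan;

    public RetentionRuleSketch(TimeSpan cacheTimespan) { _cacheTimespan = cacheTimespan; }

    // An item may only leave the cache once its TimeSpan has elapsed
    // AND every cursor has already consumed it.
    public bool MayRemove(DateTime itemTimestampUtc, bool consumedByAllCursors)
    {
        bool expired = DateTime.UtcNow - itemTimestampUtc > _cacheTimespan;
        return expired && consumedByAllCursors;
    }
}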

Method Details

AddToCache() public method

public AddToCache ( IList msgs ) : void
msgs IList
Return void

GetCacheCursor() public method

public GetCacheCursor ( IStreamIdentity streamIdentity, StreamSequenceToken token ) : IQueueCacheCursor
streamIdentity IStreamIdentity
token StreamSequenceToken
Return IQueueCacheCursor
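
A consumption sketch, assuming the IQueueCacheCursor contract of the Orleans version this provider targets (MoveNext / GetCurrent(out Exception)); verify the cursor members against the Orleans.Streams assembly actually referenced by the project:

using System;
using Orleans.Streams;

public static class CursorConsumptionSketch
{
    public static void Drain(IQueueCache cache, IStreamIdentity streamIdentity, StreamSequenceToken fromToken)
    {
        IQueueCacheCursor cursor = cache.GetCacheCursor(streamIdentity, fromToken);
        while (cursor.MoveNext())
        {
            Exception deliveryException;
            IBatchContainer batch = cursor.GetCurrent(out deliveryException);
            if (deliveryException != null)
                continue;                      // skip entries the cache reports as faulted
            // ... deliver batch.GetEvents<T>() to the stream consumers ...
        }
    }
}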

GetMaxAddCount() public method

Because our bucket sizes are inconsistent (they also depend on time), we need to make sure that the cache doesn't take more messages than it can hold. See the CalculateMessagesToAdd function.
public GetMaxAddCount ( ) : int
Return int
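
A sketch of how a receiver loop might honor GetMaxAddCount before calling AddToCache; fetchFromKafka is a hypothetical delegate standing in for the actual Kafka dequeue, and the zero check reflects the assumption that a non-positive value means the cache currently has no room:

using System;
using System.Collections.Generic;
using Orleans.Streams;

public static class AddFlowSketch
{
    public static void Pump(IQueueCache cache, Func<int, IList<IBatchContainer>> fetchFromKafka)
    {
        int maxToAdd = cache.GetMaxAddCount();          // space the cache is willing to accept now
        if (maxToAdd <= 0)
            return;

        IList<IBatchContainer> messages = fetchFromKafka(maxToAdd);   // capped fetch (hypothetical)
        if (messages != null && messages.Count > 0)
            cache.AddToCache(messages);
    }
}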

IsUnderPressure() public method

public IsUnderPressure ( ) : bool
Return bool

TimedQueueCache() public method

public TimedQueueCache ( QueueId queueId, System.TimeSpan cacheTimespan, int cacheSize, int numOfBuckets, Logger logger ) : System
queueId QueueId
cacheTimespan System.TimeSpan
cacheSize int
numOfBuckets int
logger Logger
Return System

TryPurgeFromCache() public method

public TryPurgeFromCache ( IList &purgedItems ) : bool
purgedItems IList
Return bool
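
A purge-loop sketch, assuming the usual IQueueCache pattern in which the adapter periodically calls TryPurgeFromCache and then post-processes the purged items (for example through the removal callback mentioned in the class summary); the return-value handling is an assumption:

using System.Collections.Generic;
using Orleans.Streams;

public static class PurgeSketch
{
    public static int Purge(IQueueCache cache)
    {
        IList<IBatchContainer> purgedItems;
        if (!cache.TryPurgeFromCache(out purgedItems) || purgedItems == null)
            return 0;                        // nothing was eligible for removal yet

        // purgedItems holds messages whose retention TimeSpan has elapsed
        // and that every cursor has already consumed.
        return purgedItems.Count;
    }
}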