I am currently trying to implement a load balancer and have hit a snag. The reason for this type of setup is that there are requests unique to each worker that only that worker can act on, but there are also general requests that any worker can process. I was originally going to implement it using three queues: each worker can access its local queue plus the shared queue it is a part of (e.g., worker_a accesses queue_a and the shared queue). The problem with this idea is that if there is a request in the local queue and a request in the shared queue with the same priority, there is no way to know which one should be processed first (the intent is first-come, first-served within a priority level).

Edit: I've realized the interleaving problem appears even without any priorities on the requests!

I would like to reduce locking on the queues because this will be relatively high-throughput and time-sensitive, but at the moment the only alternative I can think of would involve a lot of complexity: remove the third queue and duplicate each shared request into both queue_a and queue_b. When such a request is popped, the worker would see that it is a shared request and would have to remove it from the other queue as well. Hope that explains it clearly enough!

It sounds like you are just pushing the bubble around - however you arrange it, in the worst case you will have two requests with the same priority ready for execution by two workers. Here are two ideas for tie-breaking criteria a worker can apply, beyond priority, when deciding which queue to draw the next task from:

- Random: if all the priorities are the same, it doesn't matter which queue is chosen. In the worst case, all queues will be serviced at the same rate.
- Queue length: draw from the queue with the most elements first. This may help avoid starving the busier queues.
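A third tie-breaker worth considering is arrival order itself: stamp every request with a globally increasing sequence number at enqueue time, so that equal-priority entries compare first-come, first-served even across different queues. A minimal sketch in Python (all names here are illustrative, not from your code; the peek/pop pair is not atomic across the two queues, so a real implementation would need a retry loop or a common lock):

```python
import heapq
import itertools
import threading

# Global monotonic counter: stamps every request at enqueue time so that
# equal-priority requests still compare first-come, first-served across
# the local and shared queues.
_seq = itertools.count()

class PriorityQueue:
    """A binary heap guarded by a lock; entries are (priority, seq, request)."""

    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()

    def push(self, priority, request):
        with self._lock:
            heapq.heappush(self._heap, (priority, next(_seq), request))

    def peek(self):
        with self._lock:
            return self._heap[0] if self._heap else None

    def pop(self):
        with self._lock:
            return heapq.heappop(self._heap) if self._heap else None

def next_request(local_q, shared_q):
    """Pop from whichever queue holds the overall-next entry.

    Comparing the (priority, seq) prefixes resolves ties: a lower seq
    means the request arrived earlier, regardless of which queue it
    sits in, so FIFO order holds within each priority level.
    """
    a, b = local_q.peek(), shared_q.peek()
    if a is None and b is None:
        return None
    if b is None or (a is not None and a[:2] <= b[:2]):
        return local_q.pop()
    return shared_q.pop()
```

Usage: if a shared request and a local request are pushed with the same priority, `next_request` returns them in arrival order rather than arbitrarily preferring one queue. This does not remove the cross-queue race you mention, but it makes the ordering deterministic once both entries are visible.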