In the producer-consumer pattern we often use a queue: the producer writes items in, the consumer takes items out. We know that when multiple threads read from and write to the same resource, special attention must be paid to thread synchronization; a moment of carelessness and the program produces incorrect results. The queue is a shared buffer, and because multi-threaded access to it causes lock contention, every visit has to take a lock. How can we reduce the number of lock acquisitions? The double-buffer queue introduced today is a good choice: its whole point is to cut the overhead of synchronization and mutual exclusion.

Compared with the plain producer-consumer arrangement — the producer locks the queue to put one item in, the consumer locks it to take one item out — a queue built on two buffers can improve efficiency considerably.

Why does this reduce lock calls? Picture a Factory class producing toys and a Kid class consuming them, with two lists instead of one: ListP, dedicated to receiving toy objects from the Factory, and ListT, dedicated to the Kid. Most of the time there is nothing to synchronize: ListP is fully owned by the Factory thread and ListT is fully owned by the Kid thread, so each works on its own list without touching a lock. Only when ListT runs empty do the two sides meet: all the toys accumulated in ListP are handed over in one step by swapping the two lists, and afterwards each thread goes back to its own side — the Factory keeps filling ListP, the Kid keeps draining ListT. Synchronization is needed only at the moment of the exchange, once per batch rather than once per element. Doesn't that reduce the thread-synchronization count many times over?

The key to the double buffer, then, is the exchange of the two queues: (1) the producer thread constantly writes into producer queue A; (2) the consumer thread drains consumer queue B, and when B runs empty, the two queues are swapped under a brief lock.

A related forum exchange on lock-free queues versus TBB's concurrent queue is worth quoting:

Q: There are many lock-free queues implemented with CAS (compare-and-swap). Did the TBB developers consider that?

A: First of all, the TBB devs don't need just any lock-free queue — they need a fast queue. Lock-freedom is not about performance and scalability; under some conditions, certain lock-based algorithms are indeed faster than lock-free ones. Have you measured the performance/scalability of TBB's queue against a lock-free implementation? AFAIR, TBB's queue reduces contention by spreading elements across micro-queues, by a factor of 8 (which probably ought to be a tunable parameter). Then, some lock-free queues require the element to be a pointer (a single-word POD), whereas TBB's queue supports arbitrary element types. Then, some lock-free queues make additional copies of elements and do not support exceptions thrown from the constructor or copy constructor of the element, which also cooperates badly with arbitrary element types. Having said that, personally I would use a different algorithm for the MPMC queue in TBB — not necessarily lock-free, just different (with stricter requirements on element types).

Q: In micro_queue, TBB uses pause/yield to avoid concurrent access. I do not know much about the efficiency of that instruction. Perhaps I could replace the micro_queue with a lock-free queue and compare the performance.

A: Lock-free queues typically require a DCAS (double-word compare-and-swap) for some portions of the code, and not all platforms support DCAS. As an example, you can safely add a node to the head of a singly linked list using a single CAS, but you cannot safely remove a node from a singly linked list using a single CAS: there is a well-known situation called the ABA problem, and the recommended solution is to use a DCAS on a pointer/counter pair. True, you can mangle a pointer together with a small-ish counter and get some protection, but the risk of undetected ABA increases as you shorten the counter.

A: If a queue requires DCAS, then it typically also requires TSM (type-stable memory) — what if a consumer frees a queue node? The DCAS will simply cause a paging fault. And how would one organize efficient iteration over a linked list with DCAS? IMVHO, so-called IBM tagging (pointer + counter) is more of a hack, and a hack for only part of the problem. IMVHO, the real solution is the so-called PDR (partial-copy-on-write deferred reclamation) techniques, which include SMR, RCU, VZOOM, PC, etc. Btw, Microsoft uses a 64-bit CAS for its SList API — on 32-bit platforms that is a DCAS, and on 64-bit platforms it is just a CAS. However, they still pack a 25-bit counter alongside the pointer, so the ABA problem falls away; and along with the DCAS/CAS they also use SEH to catch paging faults, which is how they handle the paging-fault problem of reclaimed memory in lock-free algorithms.
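The ListP/ListT exchange described in the article can be sketched in a few lines. This is a minimal single-producer/single-consumer illustration, not TBB code; the class and member names (`DoubleBufferQueue`, `write_buf_`, `read_buf_`) are invented for the example. The consumer drains its own buffer with no locking at all and takes the mutex only once per swap, instead of once per element.

```cpp
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

// Minimal double-buffer queue sketch (single producer, single consumer).
// The producer appends to write_buf_ under the lock; the consumer owns
// read_buf_ exclusively and locks only to swap buffers when it runs dry.
template <typename T>
class DoubleBufferQueue {
public:
    void push(T v) {
        std::lock_guard<std::mutex> g(m_);
        write_buf_.push_back(std::move(v));
    }

    // Returns false when both buffers are empty.
    bool pop(T& out) {
        if (read_pos_ == read_buf_.size()) {    // consumer buffer drained
            std::lock_guard<std::mutex> g(m_);  // one lock per swap, not per pop
            read_buf_.clear();
            read_pos_ = 0;
            std::swap(read_buf_, write_buf_);   // the "exchange" step
            if (read_buf_.empty()) return false;
        }
        out = std::move(read_buf_[read_pos_++]);
        return true;
    }

private:
    std::mutex m_;
    std::vector<T> write_buf_;  // producer side ("ListP")
    std::vector<T> read_buf_;   // consumer side ("ListT")
    std::size_t read_pos_ = 0;
};
```

With a batch of N items, the consumer acquires the mutex roughly once per batch instead of N times, which is exactly the saving the article describes.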
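The point from the discussion — a single CAS suffices to insert at the head of a singly linked list, but not to remove — can be shown concretely. Below is a C++11 sketch with illustrative names (`Node`, `head`, `naive_pop`); it behaves correctly single-threaded, but the pop is deliberately the naive version in which the ABA problem lives once nodes can be freed and reused concurrently.

```cpp
#include <atomic>

struct Node {
    int value;
    Node* next;
};

std::atomic<Node*> head{nullptr};

// Lock-free push: a single compare-and-swap is enough, because no other
// thread can hold a stale reference to the node we are about to publish.
void push(Node* n) {
    Node* old = head.load(std::memory_order_relaxed);
    do {
        n->next = old;
    } while (!head.compare_exchange_weak(old, n,
                                         std::memory_order_release,
                                         std::memory_order_relaxed));
}

// Naive pop: between loading `old` and the CAS, another thread may pop
// `old`, free it, and push a different node reusing the same address.
// The CAS then succeeds with a dangling `old->next` -- the ABA problem.
Node* naive_pop() {
    Node* old = head.load(std::memory_order_acquire);
    while (old &&
           !head.compare_exchange_weak(old, old->next,
                                       std::memory_order_acquire,
                                       std::memory_order_acquire)) {
        // `old` was reloaded by the failed CAS; retry with the new head.
    }
    return old;
}
```

This is why the thread above says removal needs either a DCAS on a pointer/counter pair, type-stable memory, or a PDR scheme (SMR/hazard pointers, RCU, etc.) to make the dereference of `old->next` safe.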
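The pointer-plus-counter ("IBM tagging") idea mentioned above — Microsoft packing a 25-bit counter next to the pointer in SList is one instance — can be imitated portably with an ordinary 64-bit CAS by replacing pointers with 32-bit indices into a fixed node pool, leaving 32 bits for the version counter. The pool also supplies the type-stable memory the scheme needs (nodes are reused, never unmapped). This is an assumption-laden sketch, not the SList implementation; all names are invented.

```cpp
#include <atomic>
#include <cstdint>

constexpr uint32_t NIL = 0xFFFFFFFFu;

struct TaggedNode { int value; uint32_t next; };

TaggedNode pool[64];                              // fixed pool: type-stable memory
std::atomic<uint64_t> tagged_head{(uint64_t)NIL}; // low 32 bits: index, high 32: counter

uint64_t pack(uint32_t idx, uint32_t tag) { return ((uint64_t)tag << 32) | idx; }
uint32_t idx_of(uint64_t w) { return (uint32_t)w; }
uint32_t tag_of(uint64_t w) { return (uint32_t)(w >> 32); }

void tagged_push(uint32_t i) {
    uint64_t old = tagged_head.load();
    do {
        pool[i].next = idx_of(old);
    } while (!tagged_head.compare_exchange_weak(old, pack(i, tag_of(old) + 1)));
}

uint32_t tagged_pop() {
    uint64_t old = tagged_head.load();
    while (idx_of(old) != NIL) {
        // Even if the same index reappears at the head (A-B-A), the counter
        // has advanced, so a stale CAS fails exactly as it should.
        if (tagged_head.compare_exchange_weak(
                old, pack(pool[idx_of(old)].next, tag_of(old) + 1)))
            return idx_of(old);                   // `old` reloaded on CAS failure
    }
    return NIL;
}
```

The trade-off matches the thread's warning: the counter can wrap, and the fewer bits you give it (25 in SList's case, 32 here), the higher the risk of an undetected ABA.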