Re[2]: slowLockMutex / putHeavyLock

Godmar Back kaffe@rufus.w3.org
Thu, 30 Nov 2000 18:54:40 -0700 (MST)


> /*
>  * Lock a mutex - try to do this quickly but if we failed because
>  * we can't determine if this is a multiple entry lock or we've got
>  * contention then fall back on a slow lock.
>  */
> void
> _lockMutex(iLock** lkp, void* where)
> {
>         uintp val;
> 
>         val = (uintp)*lkp;
> 
>         if (val == 0) {
>                 if (!COMPARE_AND_EXCHANGE(lkp, 0, (iLock*)where)) {
>                         slowLockMutex(lkp, where);
>                 }
>         }
>         else if (val - (uintp)where > 1024) {
>                 /* XXX count this in the stats area */
>                 slowLockMutex(lkp, where);
>         }
> }
> 
> As you see, lkp is not protected at all.
> If val != 0, this doesn't mean it is still != 0 in the
> next statement; in the meantime another thread may
> execute code that changes *lkp.

If *lkp becomes null, that means some other thread has unlocked the
mutex in the meantime - we can still safely call slowLockMutex.
(The worst that can happen is that we put a heavy lock in a place
where we don't need one - a small optimization in slowLockMutex
avoids that; see the sketch below.)
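
Just to illustrate what that optimization amounts to (this is not the
actual slowLockMutex, and the helper name is made up): retry the
thin-lock CAS once inside the slow path, so a mutex that was released
in the window between our read and the call never gets inflated into a
heavy lock.

#include <stdint.h>

typedef uintptr_t uintp;
typedef struct _iLock iLock;    /* heavy-lock type, as in the code above */

/* Stand-in declaration for Kaffe's per-architecture macro. */
extern int COMPARE_AND_EXCHANGE(iLock** lkp, iLock* o, iLock* n);

/*
 * Hypothetical helper: returns non-zero if the lock turned out to be
 * free after all and we took it as a thin lock, in which case no
 * heavy lock needs to be put in place.
 */
static int
retryThinLock(iLock** lkp, void* where)
{
        return *lkp == 0
            && COMPARE_AND_EXCHANGE(lkp, (iLock*)0, (iLock*)where);
}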

The only situation in which we rely on the value read from *lkp is when
it's != 0 && *lkp - where <= 1024.  However, in that situation we are
already holding the lock, so no other thread can write *lkp.  This is a
recursive enter.
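
To spell out the arithmetic (a hypothetical illustration, not Kaffe
code): `where' is a stack address in the frame that took the lock
first, so a nested lock from the same thread sees a stored value only
a few hundred bytes above its own frame, while anything else - another
thread's stack, a heavy-lock pointer - wraps around in the unsigned
subtraction and ends up far above 1024.

#include <stdint.h>

typedef uintptr_t uintp;

/*
 * Hypothetical illustration of the recursion test in _lockMutex:
 * `stored' is the old *lkp (the outer frame's where), `current' is
 * the new where.  With a downward-growing stack the nested frame sits
 * at a lower address, so the unsigned difference is small; any
 * unrelated value wraps to something far larger than 1024.
 */
static int
looksLikeRecursiveEnter(uintp stored, uintp current)
{
        return stored - current <= 1024;
}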

There are two undocumented assumptions here: that
LOCK_INPROGRESS - where > 1024 (because *lkp can be LOCK_INPROGRESS),
and that the stack grows down.  This is all perfectly 64-bit safe and
architecture-independent.
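
A cheap way to make the first assumption explicit would be an assert
along these lines (a sketch only - the LOCK_INPROGRESS value here is
made up, the real one comes from locks.h):

#include <assert.h>
#include <stdint.h>

typedef uintptr_t uintp;

/* Made-up stand-in for the real definition in locks.h. */
#define LOCK_INPROGRESS ((void*)1)

/*
 * For any stack address `where', the unsigned distance between
 * LOCK_INPROGRESS and where must exceed 1024, so an in-progress lock
 * is never mistaken for a recursive enter and always takes the
 * slowLockMutex path.
 */
static void
checkInProgressAssumption(void* where)
{
        assert((uintp)LOCK_INPROGRESS - (uintp)where > 1024);
}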

> Actually, all code related to lkp must be under
> jthread_spinon/jthread_spinoff protection, right?
> 

I don't believe so.
That would defeat the whole purpose of fast synchronization.

The COMPARE_AND_EXCHANGE must be atomic, not the bogus one
currently in locks.c (do what Pat suggested).
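
For example, with a compiler that provides the __sync builtins, an
atomic pointer-sized compare-and-exchange can look like the sketch
below - just an illustration of the shape the macro has to have; the
real definition belongs in per-architecture code:

/*
 * Illustration only: an atomic compare-and-exchange on a pointer,
 * here via the GCC __sync builtin.  Returns non-zero iff *lkp
 * contained oldval and has been atomically replaced by newval.
 * The non-atomic version currently in locks.c must not be used
 * for the fast path above.
 */
static inline int
atomicCompareAndExchange(void** lkp, void* oldval, void* newval)
{
        return __sync_bool_compare_and_swap(lkp, oldval, newval);
}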

	- Godmar