Hallvard B Furuseth wrote:
[rearranging] Howard Chu writes:
The C standard defines "int" to be the most efficient machine word, and that is always atomic.
It does neither, as far as I can see. Among other things, "most efficient" is not well-defined: by space or by time? Efficient for which operations? It's the recommended intent somewhere (the Rationale?), but that must be subject to whatever other restrictions an implementor faces (16+ bits, backward compatibility, etc.).
Backward compatibility only reinforces my point, since int has always been atomic in the past. Likewise, when migrating old code to new hardware, machine word sizes only grow if they change at all, so any word size small enough to be atomic on old hardware is also atomic on the new.
And the compiler can choose between an atomic and a non-atomic operation, even with volatile. Volatile only guarantees that a change is complete at a sequence point, and signals need not arrive at sequence points. (C99 5.1.2.3p2,4. You can download the standard with its amendments for free at http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf.)
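For concreteness, here is a minimal sketch of the pattern the standard does bless for this situation; all of the names are illustrative, none of this is slapcat code:

#include <signal.h>
#include <stdio.h>

/* C99 7.14p2: sig_atomic_t can be accessed as an atomic entity even in
 * the presence of asynchronous interrupts; the volatile qualifier keeps
 * the compiler from caching or eliding accesses to it. */
static volatile sig_atomic_t got_signal = 0;

static void handler(int sig)
{
    (void)sig;
    got_signal = 1;   /* a single assignment, no read-modify-write */
}

int main(void)
{
    signal(SIGINT, handler);

    /* the real loop would do one unit of work per iteration */
    while (!got_signal) {
        /* ... process one entry ... */
    }
    fprintf(stderr, "interrupted, stopping after the current entry\n");
    return 0;
}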
Hm, it seems I have a much older draft; this language in 6.2.5 Types has changed.
Otherwise the standard would not have needed to define sig_atomic_t.
Note that sig_atomic_t is an *optional* part of the standard; implementations are not required to provide it.
sig_atomic_t is irrelevant in slapcat. Whether we detect zero/non-zero immediately, one entry, or two entries after the signal occurs isn't going to make any difference.
Well, I don't see any reason not to use it when the standard says so and the change is trivial, even if we don't know of a real-life failure. (E.g. if the compiler detects that the variable cannot legally change and optimizes it away, or if it is read while a signal is changing it, so that the non-atomic read yields a trap representation.)
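A contrived sketch of the first of those two concerns, assuming an optimizer that can see the whole translation unit; the names are made up for illustration:

#include <signal.h>

static sig_atomic_t done;   /* deliberately NOT volatile */

static void on_signal(int sig)
{
    (void)sig;
    done = 1;
}

static void work(void)
{
    /* process one entry */
}

int main(void)
{
    signal(SIGTERM, on_signal);

    /* Nothing the compiler can see inside this loop modifies 'done', so
     * it is entitled to read the flag once and keep the cached value,
     * turning the loop into an infinite one that never notices the
     * handler's store.  Declaring the flag volatile sig_atomic_t removes
     * both that risk and any question about torn reads. */
    while (!done)
        work();

    return 0;
}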
The standard says it is impossible for arithmetic operations on valid values to generate trap representations. Think about it - if it were *possible* for intermediate states to be invalid, then it would be *inevitable* that they occur in normal code and you'd be producing traps practically all the time, at which point the only way to get useful work out of the hardware would be to disable the trap mechanism completely (e.g. mask off parity error exception generation). Nobody is stupid enough to design a machine like that, and the standard guarantees that programmers never have to worry about such a stupidity ever existing.
Likewise, there's no issue with gotintr/abcan. The signal handler isn't armed until the writes are complete. Therefore whether it can be read atomically or not inside the handler is irrelevant; the value is constant.
So reading abcan is safe, but reading gotintr is not: the first half could be read as 0 before the signal arrives and the last half as -1 after it, leaving a value that matches no case, so the switch on it fails.
Not on any machine ever built (for which a C translator exists).
OK, you're right, it's trivial, and I see that the configure script already #defines sig_atomic_t to int if the system doesn't already define it. So your suggested changes are harmless; go ahead and make them. If you're going to that trouble, you should change gotintr to use an explicitly defined contiguous range of values.
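For the record, one way the agreed change could look; gotintr and abcan are the identifiers discussed above, but every other name and the particular values are only an illustration, not the actual slap tool code:

#include <signal.h>

/* Explicitly defined, contiguous range of states for gotintr, so the
 * switch below covers every value the handler can possibly store. */
#define INTR_NONE   0   /* no interrupt seen                 */
#define INTR_EXIT   1   /* stop cleanly after current entry  */
#define INTR_ABORT  2   /* abort immediately                 */

static volatile sig_atomic_t gotintr = INTR_NONE;
static int abcan;   /* written once, before the handler is armed */

static void intr_handler(int sig)
{
    (void)sig;
    /* abcan is constant by the time this handler can run, so reading a
     * plain int here is fine; only gotintr needs sig_atomic_t. */
    gotintr = abcan ? INTR_ABORT : INTR_EXIT;
}

void tool_init(int abort_on_intr)
{
    abcan = abort_on_intr;          /* writes complete first ...  */
    signal(SIGINT, intr_handler);   /* ... then arm the handler   */
}

int tool_loop(void)
{
    for (;;) {
        switch (gotintr) {
        case INTR_NONE:
            /* process the next entry */
            break;
        case INTR_EXIT:
            return 0;
        case INTR_ABORT:
            return 1;
        }
    }
}

int main(void)
{
    tool_init(0);
    return tool_loop();
}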