Suppose we are making a computer game in which many things work slightly differently when it is night in the game. We can handle this by sprinkling the code with "if night then ... else ...". Alternatively, we can make an idea atNight and concentrate all the differences there:
as atNight {
    as drunk() {
        // adapt behaviour: methods to get into more fights
    }
    as hyena() {
        // adapt to howl more
    }
    as ...
}
Now drunks and drunks at night are the same individuals, so it is not convenient to bring in the new idea atNight through extra classes:
class Drunk = drunk.
class DrunkAtNight = drunk atNight.
Using such pairs of classes, we could of course do a "renew" for all the drunks in the game when night falls and another one when day breaks, and the same for hyenas and all other affected classes, but that would be tedious.
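To see the bookkeeping involved, here is a rough sketch in Python rather than Tingle; the class names, the brawl_chance method, and the world list are invented purely for illustration. Every affected object has to be found and re-classed by hand at nightfall, and again at daybreak.

# Hypothetical Python analogy of the class-pair approach (not Tingle).
class Drunk:
    def brawl_chance(self):
        return 0.1

class DrunkAtNight(Drunk):
    def brawl_chance(self):
        return 0.5  # gets into more fights at night

def night_falls(world):
    # Every drunk (and hyena, and ...) must be re-classed by hand,
    # and the reverse pass is needed again when day breaks.
    for obj in world:
        if type(obj) is Drunk:
            obj.__class__ = DrunkAtNight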
So instead of using atNight as a component for classes, we activate it as a layer. To perform the next step in the game under night-time conditions, we write:
with atNight { next_step() }
For the duration of next_step(), all classes in the system will be temporarily composed with the idea atNight. For many classes, this will of course have no effect at all. But the behaviour of drunks, hyenas, and the like will change. (In principle, and unlike the effect of "renew" and "asnew", this change only happens for the current thread, but as Tingle currently does not provide threads at all, that is hard to demonstrate.)
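What a layer activation does can be sketched in Python; this is only an analogy, not Tingle's actual implementation, and the names activate, _active_layers, step, and at_night are all invented here. A layer is a table of per-class method refinements, activation pushes it onto a stack for the duration of a block, and dispatch consults that stack.

from contextlib import contextmanager

_active_layers = []  # stack of currently active layers

@contextmanager
def activate(layer):
    # Activate the layer for the duration of a with-block only.
    _active_layers.append(layer)
    try:
        yield
    finally:
        _active_layers.pop()

class Drunk:
    def step(self):
        base = "stroll around"
        # Dispatch consults the active layers for a refinement of this method.
        for layer in reversed(_active_layers):
            refinement = layer.get((type(self).__name__, "step"))
            if refinement is not None:
                return refinement(self, base)
        return base

# The atNight layer: method refinements keyed by (class, method).
at_night = {
    ("Drunk", "step"): lambda self, base: base + " and pick a fight",
}

d = Drunk()
print(d.step())            # stroll around
with activate(at_night):
    print(d.step())        # stroll around and pick a fight
print(d.step())            # stroll around again; the layer is gone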
Activating layers is a mechanism used in context-oriented programming. When applied in idea-oriented programming, it changes the meaning of message sends only: it has no influence at all on calls through scope, or on variable accesses. So the next_step() in the example is guaranteed to be the same next_step() that would be called at the same place if the call were not wrapped in a layer activation. If that is not the intended effect, we need to write self.next_step().
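Continuing the Python analogy from above (again an invented sketch, reusing the activate helper and _active_layers stack from the previous snippet, not Tingle), the difference between a call bound through the surrounding scope and a send through self looks like this:

def next_step():
    # Bound through scope; a layer activation never touches this.
    return "plain step"

class Game:
    def next_step(self):
        # Reached through a send, so an active layer can refine it.
        base = "plain step"
        for layer in reversed(_active_layers):
            refinement = layer.get(("Game", "next_step"))
            if refinement is not None:
                return refinement(self, base)
        return base

    def run(self):
        a = next_step()        # scope call: always the function above
        b = self.next_step()   # send: picks up the active layer
        return a, b

night_rules = {("Game", "next_step"): lambda self, base: base + " under night rules"}

with activate(night_rules):
    print(Game().run())   # ('plain step', 'plain step under night rules')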