Actually, this is working as intended. But the explanation is a bit complicated.

First, note that the argument of a function is defined by a pattern, and patterns work backwards from expressions: an expression *constructs* a value *inside-out*, whereas a pattern *deconstructs* a value *outside-in*. One consequence is that the subtyping requirements for type annotations in patterns work in the opposite direction from expressions, i.e., the annotation has to be a *subtype* of the pattern it annotates (which makes it a supertype of the scrutinee).

Another way to understand this is that patterns are *contra-variant*, as they can only occur in negative position. This can perhaps be best seen via an example. Consider:

```
type A = {a : Nat};
type B = {a : Nat; b : Nat};
type C = {a : Nat; b : Nat; c : Nat};
```

Now, a function can be written with an outer type annotation:

```
(func (x) { x }) : (B -> B)
```

I can duplicate the type information inside the function:

```
(func (x : B) { x : B }) : (B -> B)
```

Due to subtyping, the outer annotation can also refine the type, in the usual co/contra-variant manner:

```
(func (x : B) { x : B }) : (C -> A)
```
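Why this is sound can be seen from the caller's side (a minimal sketch, assuming record values for the types above): the caller must supply a `C`, which in particular is a `B`, and gets back a `B`, which it may only use as an `A`:

```
let f = (func (x : B) { x : B }) : (C -> A);
let y : A = f({a = 1; b = 2; c = 3}); // argument must be a C; result is usable as an A
```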

Now, we can also pull the extra annotations into the function:


```
(func ((x : B) : C) { (x : B) : A }) : (C -> A)
```

and this all type-checks consistently. Note how the subtyping order in the pattern is opposite that in the expression.

The other direction, where the inner type is smaller than the outer, is rejected:

```
// ill-typed
(func (x : C : B) { x : B : A }) : (B -> A)
```

Allowing this direction, with the expected meaning, would be unsound, as @claudio’s example shows.
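To see why, here is a hypothetical sketch of the kind of unsoundness that would arise (not necessarily @claudio's exact example): if the inner, smaller type were the one assumed for `x`, the body could access fields the argument need not have.

```
// Hypothetical: if this were accepted, with x assumed to have type C inside...
let f = (func (x : C : B) { x.c }) : (B -> Nat);
// ...then this call would read a field that the argument does not have:
let n = f({a = 1; b = 2}); // no field c!
```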

Now, given that, the next question then is: what is the type of `x` in such a case? It turns out that the typing rules propagate the outer subtype (i.e., `C`), not the inner supertype (`B`). This has to do with coverage checking. If we allowed patterns to “forget” the more precise type of the scrutinee on the way in, then we would allow weird things, especially in a switch:

```
func f(x : Nat) {
  switch x {
    case (-1 : Int) {}
    ...
  }
}
```

This would break coverage checking, at least as it works now, since it depends on precise type information.
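For contrast, with the precise scrutinee type `Nat` propagated into the patterns, coverage checking can reason as usual (a minimal sketch):

```
func f(x : Nat) {
  switch x {
    case 0 {}; // a Nat is either 0...
    case _ {}; // ...or something else: exhaustive
  }
}
```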

Hence, if there are two types feeding into a pattern, the typing rules pick the smaller one to propagate inwards.

Now, this has the weird effect that the following actually becomes okay:

```
(func (((x : C) : B) : C) { (x : B) : A }) : (C -> A)
```

The reason is that we start with the outer `C`, then see `B` but keep propagating the smaller `C` inwards, and so, the inner `C` is not in conflict. That is essentially the situation from the playground example (where there are three type annotations, though the outermost one is on the outer function’s return type in that case).

In fact, that behaviour allows even odder examples:

```
type D = {c : Nat};
(func (((x : D) : B) : C) { (x : B) : A }) : (C -> A)
```

That type-checks fine as well, because both `B` and `D` are supertypes of `C`, although they are unrelated.
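The underlying subtype relationships can be checked directly (a minimal sketch): `C` has all the fields of both `B` and `D`, so by width subtyping on records it is a subtype of each, even though `B` and `D` stand in no relation to one another:

```
let c : C = {a = 1; b = 2; c = 3};
let asB : B = c; // ok: C <: B
let asD : D = c; // ok: C <: D
// but no B is a D, and no D is a B
```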

I wouldn’t defend this behaviour on semantic grounds. It was merely simpler. The alternative would be to propagate the supertype, and thereby have typing allow an example like `case ((-1 : Int) : Nat)`, but have coverage checking do its own type propagation, figuring out that `-1` is a redundant pattern. Conceptually, that would be nicer, and I’d be open to changing it, but it would be a substantial complication to coverage checking (both in terms of specification and implementation).