Here are several examples showing that inference does not work as expected from
a beginner's POV; the first two examples come from one of my students.
Here, list is typed as List<?> instead of List<String>:
var value = ...
var list = switch (value) {
  case 0 -> List.of();
  case 1 -> List.of("foo");
  default -> throw new AssertionError();
};
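With List<?>, the element type is unknown, so the list is unusable as a
List<String>. A minimal sketch of what a beginner then runs into (same list
as above):

String s = list.get(0);  // does not compile, get() returns Object
list.add("bar");         // does not compile, only null can be added to a List<?>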
If we add a cast in front of the switch to try to help inference,
var list = (List<String>) switch (value) {
  case 0 -> List.of();
  case 1 -> List.of("foo");
  default -> throw new AssertionError();
};
inference does not use the cast as a target type, so it just becomes an
unchecked (unsafe) cast.
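For the record, here is a minimal sketch (with a hypothetical list of
integers) of why such an unchecked cast is dangerous in general: the cast is
erased at runtime, so a wrongly typed element only blows up later, far from
the cast site.

List<?> mystery = List.of(42);                  // really a List<Integer>
List<String> strings = (List<String>) mystery;  // unchecked warning, no runtime check
String s = strings.get(0);                      // ClassCastException here, not at the cast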
Declaring the type explicitly works, because the declared type List<String>
becomes the target type of each arm:
List<String> list = switch (value) {
  case 0 -> List.of();
  case 1 -> List.of("foo");
  default -> throw new AssertionError();
};
Specifying the type argument of List.of() also works, since both arms then
have the type List<String>:
var list = switch (value) {
  case 0 -> List.<String>of();
  case 1 -> List.of("foo");
  default -> throw new AssertionError();
};
To summarize, inference considers each yield type one by one (computing a lub)
instead of gathering all of them together (each one adding its own constraints).
I think we should refine the way inference works so that "it just works" in
this kind of case.
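For what it's worth, the conditional operator behaves the same way, if I'm not
mistaken: with an explicit target type each branch is inferred against it,
with var only a lub of the standalone branch types is computed.

List<String> ok = value == 0 ? List.of() : List.of("foo");  // infers List<String> for both branches
var list2 = value == 0 ? List.of() : List.of("foo");        // typed as List<?> again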
regards,
Rémi