My main concern is that "unstable" is not a fixed version, which may make it harder to reproduce results.
Well, arguably containers were invented specifically to solve the problem of having deterministic dependencies when running programs, so maybe there is some tooling we can leverage here. For example, instead of pinning to a Debian version (which is still going to change over time, even though released Debian versions are expected to change in more conservative ways than unstable), we can give a permanent tag to each container image used in the CI and pin the pipeline to that specific tag (it's not that I expect to update the CI image that often anyway). Would that work better for you?
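For concreteness, here is a minimal sketch of what that pinning could look like in `.gitlab-ci.yml` (the image name and tag are placeholders, not the actual repository):

```yaml
# Hypothetical excerpt: the pipeline references a fixed, permanent tag
# instead of a moving target like "latest" or "unstable".
image: myuser/project-ci:2024-01-15

test:
  script:
    - make check
```

An even stricter option would be pinning by digest (`image@sha256:…`), which keeps the exact same image bytes even if someone re-pushes the tag.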
Testing both the latest and the oldest supported versions of dependencies makes sense; no disagreement from me there. It does probably imply that MRs bumping the minimum supported version of a dependency are going to require CI changes as well.
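As a sketch of how that could be wired up, assuming GitLab's `parallel:matrix` and two placeholder image tags (one built against the oldest supported dependency versions, one against the latest):

```yaml
# Hypothetical job matrix: run the same test suite against both images.
test:
  parallel:
    matrix:
      - IMAGE_TAG: [oldest-deps-2024-01-15, latest-deps-2024-01-15]
  image: myuser/project-ci:$IMAGE_TAG
  script:
    - make check
```

Bumping a minimum supported version would then mean rebuilding the `oldest-deps` image under a new tag and updating the tag list here.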
Yeah. In itself that shouldn't be particularly difficult: you would hardly need to do more than update the `Dockerfile` and run `docker build` and `docker push`. As I said, the way this is currently set up is not ideal, because the Docker image lives in my own namespace on Docker Hub. There is apparently some resistance to enabling Docker repositories on this GitLab instance because of storage/bandwidth concerns. I'm not sure what the best way forward is here, but I would argue that having a less-than-ideally configured CI is still better than no CI at all.
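Concretely, the update would be something along these lines (again with a placeholder image name):

```sh
# After editing the Dockerfile: rebuild the CI image, give it a fresh
# permanent tag, push it, then point .gitlab-ci.yml at the new tag.
docker build -t myuser/project-ci:2024-06-01 .
docker push myuser/project-ci:2024-06-01
```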