A few predictions:

We are >10 years away from being capable of building broadly superintelligent AI, even with real speedups from adding compute and leveraging the capabilities of intermediate AI systems. I put about 75% confidence on the >10 years claim, as an inside view (i.e., before updating on what others think).

I do believe that "if anyone builds it, everyone dies" is basically true, though with some uncertainty around what "everyone dies" ends up looking like in practice.

However, I think warning signs will continue all the way up to superintelligence; in other words, even near-superintelligent systems will display alignment failures if we pay attention.

Conditional on this prediction being wrong, building superintelligence is no less dangerous; but the prediction matters because, if it holds, it gives us more chances to stop increasing capabilities before it is too late.

That said, depending on what the momentum is like, stopping may be very difficult, as the exponential will probably be steep by then; for this reason among others, it is always safest to stop now rather than later.