Probably

“The central point is to engage with the unique details we hadn’t considered, not whether we’ll fix an aggregate demand shortfall (yes we will), or whether we’d all die from superintelligence while this is happening (probably), or whether the timing is too fast (it is), or there would be enough compute to make this timeline work and let everyone have a constantly running agent in 2027 (there wouldn’t be). It’s fiction!”

It’s lazy to say we’ll all “probably” die from superintelligence. Is that supposed to be cute?