Dynomight has some interesting thoughts about AI risk. He spells them out in his post, Y’all Are Over-Complicating These AI-Risk Arguments. (A good read! Read it!)
What stood out to me, though, isn’t the AI-risk part; it’s his case for favoring simple arguments over complex ones (using AI risk as his case study).
You might think, Duh.
Totally. Smart people love to be smart. And sound smart. And maybe talk smart to other smart people. That’s how you get his rendition of the complex AI-risk argument, some version of: “Fast takeoff of AI, combined with alignment difficulty and orthogonality, will funnel the AI into dangerous convergent subgoals, giving it a strategic advantage and resulting in uncontainable catastrophe.”
But instead, he presents the alternative: “It would be concerning if an alien race, all with IQs of 300, landed on Earth; so if you believe an AI with an IQ of 300 could arrive, it might as well be an alien.”
Simulation Theory
This is exactly how I think about simulation theory: “Is reality a technologically generated simulation that we’re living in unknowingly?” The complex argument is some version of: “If technologically advanced civilizations can create vast numbers of conscious simulations, and at least some choose to do so, then simulated beings would vastly outnumber original beings, making it statistically far more likely that we are living in a simulation than in base reality.”
That is smart. It’s accurate and grounded in reasonable assumptions. It sounds smart.
But I like to think of it more like this: “Do you believe that one day you will be able to play a game where you won’t be able to tell the difference between the game and reality?”