I’m reading Superintelligence, a surprisingly dull book given that its topic is, well, the possibility of a rampant superintelligent AI destroying civilization and what we might do to prevent that sort of thing. Along the way, the author speculates about how a caged hyperintelligent AI (not joking here) might incorporate reasoning about the simulation hypothesis into its plans to escape confinement.
I received an email recently from a guy, let’s call him Robert Smith (name changed to protect the innocent), whose email display name was
"Smith, Robert" <firstname.lastname@example.org>.
I worry a bit that people are adopting an overly fatalistic perspective on tech adoption. This perspective has become almost a reflex: we can’t possibly know or predict what technology will end up being widely adopted, and often the “worse” technology wins, so just don’t even try!
I’ve been contemplating internet debates and discussions on programming technology and I think the music composers really have the right idea:
Here’s something pretty crazy: there are around 10^28 atoms in the human body. An astronomically tiny fraction of arrangements of 10^28 atoms in 3D space correspond to a viable human. Most arrangements of those atoms would just melt into a puddle!
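A quick back-of-the-envelope sanity check of that 10^28 figure. The 70 kg body mass and the elemental mass fractions below are rough illustrative assumptions, not precise biology, but they land within an order of magnitude:

```python
# Rough estimate of the number of atoms in a human body.
# The 70 kg mass and the elemental breakdown are assumed, approximate values.
AVOGADRO = 6.022e23  # atoms per mole

body_mass_g = 70_000  # ~70 kg adult

# element: (approx. mass fraction of body, atomic mass in g/mol)
elements = {
    "O": (0.65, 16.0),
    "C": (0.18, 12.0),
    "H": (0.10, 1.0),
    "N": (0.03, 14.0),
}

total_atoms = sum(
    body_mass_g * frac / atomic_mass * AVOGADRO
    for frac, atomic_mass in elements.values()
)

print(f"{total_atoms:.1e}")  # roughly 7e+27, i.e. on the order of 10^28
```

Hydrogen dominates the count despite its small mass fraction, since each gram of hydrogen contains sixteen times as many atoms as a gram of oxygen.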