Libertarians believe that they should have the benefits of society, without any of the responsibilities that come with it.
In your comment, who is "they"?
I’m pretty sure the Taliban could also have been described as “hillbillies with guns” when they started out. And you know what, they won.
He wasn’t even good for the German economy, though. The Nazis produced large GDP growth through massive military spending, but they had bankrupted the country well before the war was over, and had they won the war, the German economy would have crashed immediately.
The Second Amendment grants the right to bear arms, and arms were used by the insurgents in Iraq and Afghanistan.
Eh, Iraq and Afghanistan went rather poorly for the United States.
Option 2 is suicide. I guess that’s it for American Democracy. Of course, there’s option 3: the Democrats win every election until the Republican party collapses, at which point the Democratic party will likely split, with one half becoming a moderate party and the other half absorbing the remains of the Republican party.
Good on you.
I didn’t get to talk to the owner.
I live in Germany, and I spotted one of these trucks recently. It looked huge compared to every other vehicle on the road, including a delivery van. It was also too big for its parking spot, and it had a Confederate flag in the back window.
I’m not sure we’re discussing the same aspect of this thought experiment. The aspect I find Lovecraftian is that you may already be in the simulation right now. This makes the specific circumstances of our world, physics, and technology level irrelevant, as they would just be a solipsistic setup to test you on some aspect of your morality. The threat of eternal torture, on the other hand, would only apply to you if you were the real version of you, as that’s who the basilisk is actually dealing with. This works because you don’t know which of the two situations is your current one.
Wondering whether you are in a simulation or not is rather unproductive, as there’s basically nothing we can do about it regardless of what the answer is. It’s basically like wondering whether god exists or not. In the absence of clearly supernatural phenomena, the simpler explanation is that we are not in a simulation, as any universe that can produce the simulation is by definition at least as complex as the simulation. The definition I’m applying here is that the complexity of a string is the length of the shortest program that produces it (with the string’s own length as an upper bound). Like, yes, we could be living in a simulation right now, and deities could also exist.
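For anyone who wants the formal version: that’s essentially Kolmogorov complexity. In standard notation (my addition, not something spelled out above), the complexity of a string x is

```latex
K_U(x) = \min \{\, \lvert p \rvert : U(p) = x \,\}
```

where U is a fixed universal Turing machine, p ranges over programs, and |p| is the program’s length in bits. The string’s own length is an upper bound because a program can always just print x verbatim.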
The song “Seele Mein” (English: “My Soul” or “Soul is Mine”) is about a demon who follows a mortal from birth to death and then carries off the soul for eternal torture. Interestingly, the song is from the perspective of the demon, and it glosses over the life of the mortal, spending more than half of the song describing the torture. Could such demons exist? Certainly, there’s nothing that rules out their existence, but there’s also nothing indicating that they exist. So they probably don’t. And if you are being followed around by such a demon? Then you’re screwed. Theoretically, every higher being that has ever been thought of could exist. A supercomputer simulating our reality falls squarely into the category of higher being. Unless we observe things that are clearly caused by such a being, wondering about their existence is pointless.
The idea behind Roko’s Basilisk is as follows: Assume a good AGI. What does that mean? An AGI that follows human values. And since the idea originated on Less Wrong, this means utilitarianism. It also means that we’re dealing with a superintelligence, since on Less Wrong it’s generally assumed that we’re going to see a singularity once true AGI is reached, because the AGI will just upgrade itself until it’s superintelligent. Afterwards it will bring about paradise, and thus create great value. The idea is now that it might be prudent for the AGI to punish those who knew about it but didn’t do everything in their power to bring it into existence. Through acausal trade, this would cause the AGI to come into existence sooner, as people would work harder to bring it into existence for fear of torture. And what makes this idea a cognitohazard is that just by knowing about it, you make yourself a more likely target. In fact, people who don’t know about it, or who dismiss the idea, are safe, and will find a land of plenty once the AGI takes over.
Of course, if the AGI is created in, let’s say, 2045, then nothing the AGI can do will cause it to be created in 2044 instead.
Roko’s Basilisk hinges on the concept of acausal trade: future events can cause past events if both actors can sufficiently predict each other. The obvious problem with acausal trade is that if you’re actor B in the future, you can’t change what actor A in the past did. It’s A’s prediction of B’s action that causes A’s action, not B’s action. Meaning the AI in the future gains literally nothing by exacting petty vengeance on people who didn’t support its creation.
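Here’s a toy sketch of that point (entirely my own framing, with made-up actions; nothing here is from the original thought experiment):

```python
# Toy model of acausal trade: actor A in the past acts only on its
# *prediction* of actor B; B's actual later choice can't change A's action.

def actor_a(prediction_of_b: str) -> str:
    """A's decision rule: help build the AGI only if it predicts punishment."""
    return "build the AGI" if prediction_of_b == "punish defectors" else "do nothing"

# A commits to an action based on its prediction alone.
a_action = actor_a(prediction_of_b="punish defectors")

# Whatever B actually does later, a_action is already locked in,
# so actually carrying out the punishment buys B nothing.
for b_action in ("punish defectors", "spare everyone"):
    print(f"B chooses '{b_action}' -> A already chose '{a_action}'")
```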
Another thing Roko’s Basilisk hinges on is that a copy of you is also you. If you don’t believe that, then the AI torturing a simulated copy of you doesn’t need to bother you any more than the AI torturing a random innocent person. On a related note, the AI may not be able to create a perfect copy of you. If you die before the AI is created, and nobody scans your brain (brain scanners currently don’t exist), then the AI will only have the surviving historical records of you to reconstruct you. It may be able to create an imitation so convincing that any historian, and even people who knew you personally, will say it’s you, but it won’t be you. Some pieces of you will be forever lost.
Then there’s the fact that a singularity-type superintelligence might not be possible. The idea behind the singularity is that once we build an AI, the AI will improve itself, and then it will be able to improve itself faster, leading to an exponential growth in intelligence. The problem is that this assumes the marginal effort of getting more intelligent grows slower than linearly. If the marginal difficulty grows as fast as the intelligence of the AI, then the AI will become more and more intelligent, but we won’t see an exponential increase. My guess would be that we’d see logistic growth of intelligence: the AI first becomes more and more intelligent, and then the growth slows and eventually stagnates.
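To make that concrete, here’s a toy simulation (my own sketch, with made-up numbers; the difficulty functions are purely illustrative assumptions):

```python
# Toy model: each step, the AI gains intelligence proportional to its
# current intelligence divided by the marginal difficulty of improving.

def simulate(difficulty, steps=200, start=1.0):
    """Iterate I += I / difficulty(I) and return the trajectory."""
    trajectory = [start]
    for _ in range(steps):
        i = trajectory[-1]
        trajectory.append(i + i / difficulty(i))
    return trajectory

# Constant marginal difficulty: intelligence grows ~10% per step -> exponential.
exponential = simulate(lambda i: 10.0)

# Difficulty growing as fast as intelligence: I += 1 per step -> merely linear.
linear = simulate(lambda i: i)

# Difficulty blowing up near a hard ceiling K -> logistic growth:
# accelerates at first, then slows and stagnates near K.
K = 1000.0
logistic = simulate(lambda i: 10.0 / (1.0 - i / K))

print(round(exponential[-1]), round(linear[-1]), round(logistic[-1]))
```

The third case is exactly the logistic map I += 0.1 · I · (1 − I/K), which plateaus at K instead of diverging.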
Just as fast as a car if you run as fast as a car.
When interpreting the comic, I find it interesting to keep in mind that a wolf pack is a family unit, consisting of parents and children. So the wolf is taking the property for his family. The comic is advocating banditry, basically.
What they’re saying is that all rights are derived from force. The state that enforces your rights uses force to do so. This comic is mostly dunking on anarcho-capitalists, who seemingly believe that property rights are magic.
Given Trump’s track record with keeping contracts, let alone promises, I doubt he’s any better at returning favors.
Thanks for your reply. Are his insurance premiums going to go up?
What about the guy whose space yacht you stole? Was he another player or an NPC? If he was another player, will he have to buy a new space yacht for real money?
Umm … that AI-generated hentai on the same article’s page, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best way to call out CSAM was to link directly to the source.
The image depicts mature women, not children.
I’d say, whatever you do, it has to be obvious that the librarians are innocent. So I’d say ‘accidentally’ forgetting the stacks in the wrong section is out.