EnsignRedshirt [he/him]

  • 0 Posts
  • 80 Comments
Joined 4 years ago
Cake day: July 26th, 2020

  • Bill Burr is a surprisingly thoughtful and principled guy with consistently good opinions. He’s a comedian, and he doesn’t have any theory underpinning his worldview, but I’d bet that if you look at why he’s been criticized in the past, it’s mostly by liberals who are mad that he’s being critical of liberals. I’m not at all surprised that he lit up Bill Maher over his boomer-ass Israel-Palestine takes.

  • Properly designed tools with good data will absolutely be useful. What I like about the analogy of the talking dog and the braindead CEO is that it points out how people are looking at ChatGPT and DALL-E and going “cool, we can just fire everyone tomorrow,” and no, you most certainly can’t. These are impressive tools, but they’re still not adequate replacements for human beings for most things. Even in the example of medical imaging, there’s no way any part of the medical establishment is going to allow diagnosis without a doctor verifying every single case, for a variety of very good reasons.

    There was a case recently of an Air Canada chatbot that gave a traveler bad information about a discount/refund, which eventually resulted in the airline being forced to honor what the chatbot said, because of course they have to honor what it says. It’s the representative of the company; that’s what “customer service representative” means. If a customer can’t trust what the bot says, then the bot is useless. The function the human served still needs to be fulfilled, and a big part of that function is dealing with edge cases that require some degree of human discretion. In other words, you can’t even replace customer service reps with “AI” tools, because they are essentially talking dogs, and a talking dog can’t do that job.

    Agreed that ‘artificial intelligence’ is a poor term, or at least a poor way to describe LLMs. I get the impression that some people believe the problem of intelligence has been solved and it’s just a matter of refining the solutions and getting enough computing power, but the reality is that we don’t even have a theoretical framework for creating actual intelligence aside from doing it the old-fashioned way. These LLM/AI tools will be useful, and in some ways revolutionary, but they are not the singularity.

  • The question doesn’t necessarily rely on a post-communist society. Assuming one just makes the question easier to answer by eliminating some obvious objections: that they’d have the global financial system forced on them, or inevitably become dispossessed and marginalized, all the things that exposure to capitalism does.

    The question I have is more about whether there are conditions under which non-contact becomes the more ethically dubious position. It seems clear that they don’t want visitors, but if they were suffering greatly or faced existential danger, it would get a lot harder to maintain a non-interference position once you recognize that interfering can’t possibly be worse than death.

  • I wonder: in a hypothetical scenario where we achieve global communism, would it still be appropriate to maintain no contact? Assuming for argument’s sake that we can get around the practical issues like disease, would we not owe them some form of consideration? As it stands, I feel like contact with the rest of the world would only make their lives worse and probably end their civilization as they know it, but if we had a far more just and equitable society, would refusing to engage start to resemble a form of chauvinism? Or at least neglect?

    I’m honestly not sure what the answer is, or if I’m just wrong and the answer is simpler than I’m making it out to be. I feel like it’s easy to argue for no contact, for a variety of reasons, but is there a point at which non-interference starts to look like a form of captivity?