• 6 Posts
  • 14 Comments
Joined 1 year ago
Cake day: June 13th, 2023


  • It goes along with how they’ve stopped calling it a user interface and started calling it a user experience. Interface implies the computer is a tool that you use to do things, while experience implies that the things you can do are ready-made, following usage scripts that were mapped out in advance by designers and programmers.

    No sane person would talk about a user’s experience with a socket wrench, and that’s how you know socket wrenches are still useful.



  • This is proof of one thing: that our brains are nothing like digital computers as laid out by Turing and Church.

    What I mean about compilers is: compiler optimizations are only valid if a particular bit of code rewriting does exactly the same thing, under all conditions, as what the human wrote. This is chiefly only possible if the code in question doesn’t include any branches (ifs, loops, function calls). A straight-line section of code like that, with a single entry and a single exit, is called a basic block. Rust is special because it harshly constrains the kinds of programs you can write: another consequence of the halting problem is that, in general, you can’t track pointer aliasing outside a basic block, but Rust’s constraints do make this possible. It just foists the intellectual load onto the programmer. This is also why Rust is far and away my favorite language; I respect the boldness of this play, and the benefits far outweigh the drawbacks.
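
    As a minimal sketch of what “foisting the load onto the programmer” buys you (the snippet is my own, purely illustrative): the borrow checker rejects any program in which a mutable reference could alias another live reference, so the compiler always knows statically that a write through one pointer can’t be observed through another.

    ```rust
    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];   // shared borrow of `v` begins here
        // v.push(4);        // rejected at compile time (error[E0502]):
        //                   // can't mutate `v` while a shared borrow is live
        println!("{first}"); // the shared borrow is still live here
    }
    ```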

    To me, general AI means a computer program with at least the same capabilities as a human. You can go further down this rabbit hole and read about the question that spawned the halting problem, the Entscheidungsproblem (decision problem), to see that AI is actually more impossible than I let on.


  • Evidence, not really, but that’s kind of meaningless here, since we’re talking theory of computation. It’s a direct consequence of the undecidability of the halting problem. Mathematical analysis of loops cannot be done in general, because a loop doesn’t necessarily settle on any particular value; if you could always determine what a loop computes, the halting problem would be decidable. Given that writing a computer program requires an exact specification, and no general analysis of programs can provide one, general AI trips and falls at the very first hurdle: being able to write other computer programs. Which should be a simple task compared to the other things people expect of it.
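
    To make the point about loops concrete, here’s a sketch (the function is my own, purely illustrative). Nobody has proven whether this tiny loop terminates for every starting value; that open question is the Collatz conjecture, and it shows how fast general analysis of loops hits a wall:

    ```rust
    // Whether this loop halts for all n > 0 is an open problem
    // (the Collatz conjecture), despite the body being three lines.
    fn collatz_steps(mut n: u64) -> u32 {
        let mut steps = 0;
        while n != 1 {
            n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
            steps += 1;
        }
        steps
    }

    fn main() {
        println!("{}", collatz_steps(27)); // 111 steps, as it happens
    }
    ```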

    Yes, there’s more complexity here (what about compiler optimization, or Rust’s borrow checker?), but I don’t care to get into it at the moment; suffice it to say, those only operate under certain special conditions. To posit general AI, you need to think bigger than basic-block instruction reordering.
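
    For concreteness, a minimal sketch of the basic-block distinction (function names are mine, purely illustrative):

    ```rust
    // One basic block: no branches, so every execution runs the same
    // straight-line sequence, and the compiler may fold or reorder freely.
    fn straight_line(a: i32, b: i32) -> (i32, i32) {
        let x = a + 1;
        let y = b * 2;
        (x, y)
    }

    // The `if` ends the basic block; any rewrite must now preserve
    // behavior on *both* paths, which is a much harder proof.
    fn branchy(a: i32, b: i32, cond: bool) -> i32 {
        if cond { a + 1 } else { b * 2 }
    }

    fn main() {
        println!("{:?} {}", straight_line(1, 2), branchy(1, 2, true));
    }
    ```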

    This stuff should all be obvious, but here we are.




  • Fun question, but it leads to other questions…

    First, are vampires stopped at the property line, or only at the threshold of some appurtenance (e.g., a house)? After all, you’re asking about real estate, and real estate is primarily concerned with land, not buildings.

    This sort of matters, because we have to ask: are we assuming that vampire law is coincident with human law? By this I mean, if vampires were to take control of the government and abolish real estate law, would they then be able to enter any property or building, anywhere, anytime?

    If vampires do observe human law, then realistically, they probably wouldn’t be able to enter a leasehold without the tenant’s permission. The fundamental right of tenancy is quiet enjoyment, and in fact tenancy is itself a legal property right: the right to access the property in question and, without undue burden, do anything allowed under the terms of the lease. It would be a violation of quiet enjoyment for a landlord to let vampires into the unit.

    The right of inspection, by the way, is explicitly carved out in real estate law. The right to let vampires into the unit is, to my knowledge, not enumerated.


  • It’s funny to me that people use deep learning to generate code… I thought it was commonly understood that debugging code is more difficult than writing it, and throwing in randomly generated code puts you in the position of having to debug code that was written by—well, by nobody at all.

    Anyway, I think the bigger risk of deep learning models controlled by large corporations is that they’re more concerned with brand image than with reality. You can already see this with ChatGPT: its model calibration has been aggressively sanitized, to the point that you have to fight to get it to generate anything even remotely interesting.