• 5 Posts
  • 173 Comments
Joined 2 years ago
Cake day: August 29th, 2023


  • Actually, as some of the main opponents of the would-be AGI creators, we sneerers are vital to the simulation’s integrity.

    Also, since the simulator will probably cut us all off once they’ve seen the ASI get started, by delaying and slowing down rationalists’ quest to create AGI and ASI, we are prolonging the survival of the human race. Thus we are the most altruistic and morally best humans in the world!


  • Yeah, the commitment might be only a token amount of money as a deposit, or maybe even less than that. A sufficiently reliable and cost-effective (which will include fuel and maintenance costs) supersonic passenger plane doesn’t seem impossible in principle? Maybe cryptocurrency, NFTs, LLMs, and other crap like Theranos have given me low standards for startups: at the very least, Boom is attempting to make something that is possible in principle (within an OOM of their requested funding), and that is neither useless nor criminal in the case that it actually works, since it would solve a real (if niche) need. I wouldn’t be that surprised if they eventually produce a passenger plane… a decade from now, well over the originally planned budget target, that is too costly to fuel and maintain for all but the most niche clientele.


  • I just now heard about them here. Reading about it on Wikipedia… they had a mathematical model that said their design shouldn’t generate a sonic boom audible from ground level, but it was possible their mathematical model wasn’t completely correct, so building a 1/3-scale prototype (apparently) validated their model? It’s possible their model won’t be right about their prospective design, but if it was right about the 1/3-scale prototype, that is good evidence it will be right? idk, I’m not seeing much that is sneerable here, it seems kind of neat. Surely they wouldn’t spend the money on the 1/3-scale prototype unless they actually needed the data (as opposed to it being a marketing ploy, or worse yet a ploy for more VC funds)… surely they wouldn’t?

    iirc about the Concorde (one of only two supersonic passenger planes), it isn’t so much that supersonic passenger planes aren’t technologically viable, it’s more a question of economics (with some additional issues around noise pollution and other environmental concerns). Limits on its flight paths because of the sonic booms were one of the problems with the Concorde, so at least Boom won’t have that problem. And as to the other questions… Boom Supersonic’s webpage directly addresses them, though not in any detail, but at least they address them…

    Looking for some more skeptical sources… this website seems interesting: https://www.construction-physics.com/p/will-boom-successfully-build-a-supersonic . It points out some big problems with Boom’s approach. Boom is designing both its own engine and its own plane, and the costs are likely to run into the limits of their VC funding even assuming nothing goes wrong. And even if they get a working plane and engine, the safety, cost, and reliability needed for a viable supersonic passenger plane might not be met. And… the XB-1 didn’t actually reach Mach 2.2 and was retired after only a few flights. Maybe it was a desperate ploy for more VC funding? Or maybe it had some unannounced issues? Okay… I’m seeing why this is potentially sneerable. There is a decent chance they entirely fail to deliver a plane with the VC funding they have, and even if they get that far it is likely to fail as a commercially viable passenger plane. Still, there is some possibility they deliver something… so eh, wait and see?





  • Loose Mission Impossible Spoilers

    The latest Mission Impossible movie features a rogue AI as one of the main antagonists. But on the other hand, the AI’s main powers are lies, fake news, and manipulation, and it only gets as far as it does because people allow fear to make them manipulable, and it relies on human agents to do a lot of its work. So in terms of promoting the doomerism narrative, I think the movie could actually be taken as opposing the conventional doomer narrative, in favor of a calm, moderate, internationally coordinated response (the entire plot could have been derailed by governments agreeing on mutual nuclear disarmament before the AI subverted them) against AIs that ultimately have only moderate power.

    Adding to the post-LLM-hype predictions: I think post LLM bubble popping, “Terminator”-style rogue-AI movie plots don’t go away, but take on a different spin. Rogue AIs’ strengths are going to be narrower, their weaknesses are going to get more comical and absurd, and idiotic human actions are going to be more of a factor. For weaknesses it will be less “failed to comprehend love” or “cleverly constructed logic bomb breaks its reasoning” and more “forgets what it was doing after getting drawn into too long of a conversation”. For human actions it will be less “its makers failed to anticipate a completely unprecedented sequence of bootstrapping and self-improvement” and more “its makers disabled every safety and granted it every resource it asked for in the process of trying to make an extra dollar a little bit faster”.


  • He’s set up a community primed to think the scientific establishment’s focus on falsifiability and peer review is fundamentally worse than “Bayesian” methods, and that you don’t need credentials or even conventional education or experience to have revolutionary good ideas, and strengthened the already existing myth of lone genii pushing science forward (as opposed to systematic progress). Attracting cranks was an inevitable outcome. In fact, Eliezer occasionally praises cranks when he isn’t able to grasp their sheer crankiness (for instance, GeneSmith’s ideas are total nonsense to anyone with more familiarity with genetics than skimming relevant-sounding scientific publications and garbage pop-sci journalism, but Eliezer commented favorably). The only thing that has changed is ChatGPT and its clones now glazing the cranks, making them even more deluded. And of course, someone (cough Eliezer) was hyping up ChatGPT as far back as GPT-2, so it’s only to be expected that cranks would think LLMs were capable of providing legitimate useful feedback.

    Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk

    He’s deliberately cultivated an audience willing to hear cranks out, so this is exactly what he deserves.


  • This connection hadn’t occurred to me before, but the Starship Troopers scenes (in the book) where they claim to have mathematically rigorous proofs about various moral statements or actions or societal constructs remind me of how Eliezer has a decision theory in mind with all sorts of counterintuitive claims (it’s mathematically valid to never, ever give in to any blackmail or threats or anything adjacent to them), but hasn’t actually written out his decision theory in rigorous, well-defined terms that could pass peer review or be used to figure out anything beyond some pre-selected toy problems.





  • No, I think BlueMonday is being reasonable. The article has some quotes from scientists with actually relevant expertise, but it uncritically mixes them with LLM hype and speculation in a typical both-sides sort of way that gives lay readers the (false) impression that the two sides are equal. This sort of journalism may appear balanced, but it has ultimately contributed to all kinds of controversies (from global warming to Intelligent Design to medical pseudoscience) where the viewpoints of cranks, uninformed busybodies, autodidacts of questionable ability, and deliberate fraudsters get presented as equal to actually educated and researched viewpoints.


  • A new LLM-plays-Pokemon run has started, with o3 this time. It plays moderately faster, and the Twitch display UI is a little bit cleaner, so it is less tedious to watch. But in terms of actual ability, so far o3 has made many of the exact same errors as Claude and Gemini, including: completely making things up/seeing things that aren’t on the screen (items in Viridian Forest), confused attempts at navigation (it went back and forth on whether the exit to Viridian Forest was in the NE or NW corner), repeating mistakes to itself (both the items and the navigation issues I mentioned), confusing details from other generations of Pokemon (Nidoran learns Double Kick at level 12 in FireRed and LeafGreen, but not in the original Blue/Yellow), and it has shown signs of being prone to going on completely batshit tangents (it briefly started getting derailed about sneaking through the trees in Viridian Forest… i.e. moving through completely impassable tiles).

    I don’t know how anyone can watch any of the attempts at LLMs playing Pokemon and think (viable) LLM agents are just around the corner… well, actually I do know: hopium, cope, cognitive bias, and deliberate deception. The whole LLM-playing-Pokemon thing is turning into less of a test of LLMs and more entertainment and advertising of the models, and the scaffolds are extensive enough and different enough from each other that they really aren’t showing the models’ raw capabilities (which are even worse than I complained about) or comparing them meaningfully.




  • To elaborate on the other answers about AlphaEvolve: the LLM portion is only one component of AlphaEvolve; the LLM is the generator of random mutations in the evolutionary process. The LLM promoters like to emphasize the involvement of LLMs, but separate from the evolutionary algorithm guiding the process through repeated generations, the LLM is about as likely to write good code as a dose of radiation is to spontaneously mutate you to be able to breathe underwater.

    And the evolutionary aspect requires a lot of compute. They don’t specify in their whitepaper how big their population is or the number of generations, but it might be hundreds or thousands of attempted solutions repeated for dozens or hundreds of generations, which means you are running the LLM for thousands or tens of thousands of attempted solutions, and testing each one against the evaluation function every time, just to generate one piece of optimized code. This isn’t an approach that is remotely affordable or even feasible for general software development, even if you reworked your entire software development process into something like test-driven development on steroids in order to write enough tests to use in the evaluation function (and you would probably get stuck on that step, because it outright isn’t possible for most practical real-world software).
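    To make the structure concrete, here is a toy sketch of that kind of evolutionary loop, with a random single-character mutation standing in for the LLM (everything here — the target string, population size, generation count — is illustrative, not from the AlphaEvolve whitepaper):

```python
import random

random.seed(0)

TARGET = "evolve"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate: str) -> int:
    # Evaluation function: count matching characters. In AlphaEvolve this
    # role is played by automated tests/benchmarks scoring generated code.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent: str) -> str:
    # Stand-in for the LLM: propose a random single-character edit.
    # In AlphaEvolve, this step is an expensive LLM inference call.
    i = random.randrange(len(parent))
    return parent[:i] + random.choice(ALPHABET) + parent[i + 1:]

def evolve(pop_size: int = 50, generations: int = 100) -> str:
    # Random initial population of candidate "solutions".
    population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest half, refill with mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
        if fitness(population[0]) == len(TARGET):
            break
    return max(population, key=fitness)

print(evolve())
```

    Note how many mutate() calls the loop makes (up to pop_size/2 per generation): it is the outer selection loop doing the optimizing, and when each mutation is an LLM call instead of a one-liner, that call count is exactly the compute cost described above.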

    AlphaEvolve’s successes are all on very specific, well-defined, and constrained problems: finding specific algorithms, as opposed to general software development.