Hi, I’m Eric and I work at a big chip company making chips and such! I do math for a job, but it’s cold hard stochastic optimization that makes people who know names like Tychonoff and Sylow weep.

My pfp is Hank Azaria in Heat, but you already knew that.

  • 8 Posts
  • 241 Comments
Joined 1 year ago
Cake day: January 22nd, 2024

  • Kind of knew that after Claude Plays Pokemon went semi-viral, it was going to immediately get Goodhart'd. I also saw the usual doomers go BY END OF YEAR AGENTS WILL BEAT POKEMON, which I thought was a severe underestimate at the time. They were undoubtedly basing their projection on the Anthropic people who posted a little chart showing how far each version of Claude made it, waiting for Pokemon-playing skill to emerge from larger and larger models, instead of thinking, hmm, they are iteratively refining the customized tools every time it gets stuck. Then after Gemini 'beat' the game, I read a disappointed response from an RL guy who said that after trying to replicate the results, they concluded Google's setup was basically 90% harness, 10% model, despite the Google team basically implying it was raw pixels-to-action.

  • I couldn’t find further holes in it

    Here are a couple:

    1. IIRC it claims we'll have reliable "agents" in mid-2025. Fellas, it's almost June in the year of the "agents" and frankly I don't see shit. We are not starting strong here.
    2. They predict a 10k-person anti-AI protest in DC. For context, the recent "Hands Off" protest in DC saw a 100k turnout, and the Israel/Palestine protests saw 300k in DC in 2023. A ten-thousand-person protest isn't really anything out of the ordinary? It's almost like the authors have never been to a protest, or don't understand collective action because they live in a bubble or something? But they assure us this document is thoroughly researched. Maybe their point was self-deprecating: "woe is us, only 10k people show up :("
    3. When they get into their super-AGI fanfic, they describe Agent-N as something that "never stops training," continuously learning from the environment. The only way I can read this is that somehow we stumble on paradigm-shifting algorithmic breakthroughs by coincidence in the next couple of years that make DL obsolete, so we can abandon the train-then-inference approach and instead have this embodied entity constantly taking feedback from the environment to "train", yet the system itself is still described under the massive, data-center-heavy DL framework. It's like they know that bio intelligence has this continuous feedback mechanism, so obviously AI researchers will just patch that in, how hard can it be?
    4. Ong, I swear at some point they just drop in "hallucinations are solved," the thing they have been claiming will be solved in the next month since 2023.

  • Daniel Kokotajlo, the actual ex-OpenAI researcher

    Unclear to me what Daniel actually did as a 'researcher' besides draw a curve going up on a chalkboard (true story: the one interaction I had with LeCun was showing him Daniel's LW account, which is just singularity posting, and Yann thought it was big funny). I admit I am guilty of engineer gatekeeping posting here, but I always read Danny boy as a guy they hired to give lip service to the whole "we are taking safety very seriously, so we hired LW philosophers" thing, and then after Sam did the uno reverse coup, he dropped all pretense of giving a shit / funding their fanfic circles.

    Ex-OAI "governance" researcher just means they couldn't forecast that they were the marks all along. This is my belief, unless he reveals that he superforecasted back in 1998 that Altman would coup and sideline him. Someone please correct me if I'm wrong and they have evidence that Daniel actually understands how computers work.

  • Also, man, why do I click on these links and read the LWers' comments? It's always insufferable people being like, "woe is us, to be cursed with the forbidden knowledge of AI doom, we are all such deep thinkers, the layperson simply could not understand the danger of AI." Like bruv, it ain't that deep; I think I can summarize it as follows:

    hits blunt "bruv, imagine if you were a porkrind, you wouldn't be able to tell why a person is eating a hotdog. AI will be to us like we are to a porkchop, and to get more hotdogs humans will find a way to turn the sun into a meat casing, this is the principle of intestinal convergence"

    Literally saw another comment where one of them accused the other of being a "superintelligence denier" (i.e., heretic) for suggesting maybe we should wait till the robot swarms come over the hills before we declare it's game over.