WhyEssEff [she/her]

I do the emotes lea-caramelldansen

  • 264 Posts
  • 3.17K Comments
Joined 5 years ago
cake
Cake day: July 25th, 2020

help-circle




  • finished Onikakushi-hen

    this is paranoid hallucinations on Keiichi’s end, right? I think I clocked it at the takoyaki needle. Parallel to razor candy delusions.

    My hanging question is why Keiichi’s death mirrors Tomitake’s. Current theory is mass psychogenic illness? Tomitake prompted Keiichi’s paranoia and then died like that, maybe Tomitake did something to Takano as Keiichi did to Rena and Mion? The superstitions around Watanagashi might be a self-reproducing phenomenon, though I don’t yet know if there’s fallout after the festival like that.

    Going to alternate Umineko and Higurashi now, Umi E5 next so I can stay ahead of Joseph Anderson and keep watching his streams of it without worrying about it influencing my read on the episode





  • If you train a model on data and it outputs in a way you don’t like, and that dislike that is linked to the data itself skewing your output, to fundamentally ‘fix’ it you have to tune the dataset yourself and retrain the model. On Grok’s scale, that’s around a trillion tokens (morphemes, words, punctuation, etc.) that you need to sift through and decide what to manually edit or prune so that the weights work in your favor whilst maintaining generalization otherwise.

    If you publicly source said data/use other updating datasets as dependencies and choose to continue publicly sourcing it in further updates (i.e., keeping your model up to date with the current landscape), then if you want to tune an opinion/view out of existence, it is going to be a Sisyphean task.

    There’s a bandage solution which is fucking with the system prompt, but LLMs are inherently leaky and need active patchwork in such cases to combat jailbreaking.