• 12 Posts
  • 927 Comments
Joined 9 months ago
Cake day: October 7th, 2024




  • I work in the screen printing industry, and one of the things that is generally very difficult to do without many hours of work is taking a photographic image and making it work for the screen printing press. An expert can do it in 15-30 minutes.

    For most non-skin-tone images (we generally fall back to heat transfer for human faces), I wrote a GIMP plugin (in Python) that lets you extract each color of the image into its own layer with an automated process that's done in a few seconds.

    It’s not as good as the expert result, but it’s also infinitely easier to do. Also, GIMP 3 released like a month after this was finished and I don’t know if I can be bothered to port it.

    Here’s the page for the project: https://github.com/otacon239/inksplit
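
    The plugin itself is on GitHub above, but for anyone curious about the general idea, here is a rough standalone sketch of the same color-separation approach. It uses Pillow and NumPy instead of the GIMP API, and the file name and four-color count are placeholders for illustration, not what inksplit actually does.

    ```python
    # Rough standalone sketch of the color-separation idea (NOT the actual
    # inksplit plugin): quantize a photo down to a few flat colors, then
    # save each color as its own mask, one per screen. File name and color
    # count are placeholders.
    from PIL import Image
    import numpy as np

    NUM_COLORS = 4  # e.g. a four-screen spot-color job

    img = Image.open("artwork.png").convert("RGB")

    # Collapse the photo to a small palette, roughly like picking spot colors.
    quantized = img.quantize(colors=NUM_COLORS)
    palette = quantized.getpalette()[: NUM_COLORS * 3]
    indices = np.array(quantized)  # per-pixel palette index

    for i in range(NUM_COLORS):
        r, g, b = palette[i * 3 : i * 3 + 3]
        # White where this color prints, black everywhere else -- in GIMP this
        # would become its own layer instead of a separate file.
        mask = Image.fromarray(np.where(indices == i, 255, 0).astype("uint8"), "L")
        mask.save(f"separation_{i}_{r:02x}{g:02x}{b:02x}.png")
    ```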




  • I’m not even into this use of AI.

    Before, you knew that every single asset was placed by hand; even if it was a prebuilt asset, a human was directly involved with every piece of artwork, dialogue, text, etc.

    Now, you might come across dozens of random text documents or images that are only vaguely related to the story. How do I, as the player, know what's actually relevant? Maybe the AI-generated text sends me down a rabbit hole that has nothing to do with the game because it wasn't proofread.

    These were tasks that, even when menial, allowed the artist to express themselves all the more. I'm imagining a painter being handed a premixed palette, or a sculptor having someone apply the finishing touches for them.

    It just feels like giving up at the finish line. Why do we need a bunch of unrelated text and images if the game stands fine without them?



  • The clever thing that Google Earth can do is load in detail dynamically. When you get close, only a few hundred feet of high-res assets need to be loaded to trick the brain into believing it.

    Here, you have only the resolution of the image with a hint of extra depth data from the second perspective. Your brain interprets it to be way more lifelike than a typical image because of the extra dimension, but if you got close, it would remain super blurry/blocky, just like it looks when you pinch to zoom all the way on one side of the image.
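
    To make that concrete, here is a toy sketch of the distance-based level-of-detail idea (my own illustration, nothing to do with Google Earth's actual implementation, and the tile sizes and thresholds are made up): only tiles near the camera request high-resolution data, so a small patch of detail sells the illusion.

    ```python
    # Toy sketch of distance-based level-of-detail selection: nearby tiles
    # get high-resolution meshes/textures, distant ones stay coarse.
    # Thresholds and tile layout are hypothetical.
    from dataclasses import dataclass
    import math

    # distance in meters -> detail level (higher = more detail)
    LOD_THRESHOLDS = [(100, 3), (500, 2), (2000, 1)]

    @dataclass
    class Tile:
        x: float
        y: float

    def lod_for_tile(tile: Tile, cam_x: float, cam_y: float) -> int:
        """Pick how much detail to stream in for a tile based on camera distance."""
        dist = math.hypot(tile.x - cam_x, tile.y - cam_y)
        for max_dist, level in LOD_THRESHOLDS:
            if dist <= max_dist:
                return level
        return 0  # far away: only the blurry base imagery ever loads

    # Only the handful of tiles near the camera ask for full detail.
    tiles = [Tile(x * 250.0, 0.0) for x in range(10)]
    print([lod_for_tile(t, cam_x=0.0, cam_y=0.0) for t in tiles])
    ```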



  • There have been a couple of documentaries on these schools. It takes students who have been through these programs literal years of dedication to get any action taken, and that's with all sorts of evidence.

    If you want a first-hand account of the full experience of a student, along with what they had to go through to get action taken (including being chased out of the country!), you can go to elan.school (cw: strong physical/mental abuse)

    The people behind these schools have mafia levels of power and will literally disappear students back into their schools. If you're not a previous student, you might be better able to approach the situation since they don't know you, but just be careful.


  • I guess I have to eat my words over a comment I made a month or so back. I had said an XBOX handheld would outsell the Steam Deck simply due to brand recognition.

    This was of course before I thought they'd be dumb enough to aim a console twice as expensive at the mass market. If it were priced the same as the Deck, they might have had a chance.

    I don’t know anyone that would consider one of these now.