Hey everyone, I’m building a new server to run Jellyfin (along with a few other services like Pi-hole), and I’m stuck deciding between GPU and CPU transcoding.

My main concern is smooth 4K HDR transcoding for 1 stream. I’ve been reading mixed advice online – some people say a strong CPU with good single-core performance can handle it, while others recommend a dedicated GPU.

Should I focus my budget (~$1000 AUD / ~$658 USD) on a good CPU, or spend some of it on a dedicated GPU?

  • @entropicdrift
    12 days ago

    I use QSV hardware acceleration on Jellyfin, with:

    • Low Power h264 and HEVC encoders enabled, plus VPP tonemapping
    • Prefer OS Native DXVA or VA-API decoders checked (apparently this is needed for VPP tonemapping)
    • Enable Tone mapping also checked, so it can fall back to OpenCL if VPP doesn’t work
    • Thread count set to Auto, preset set to medium
    • CRF 25 for h265 encoding and CRF 23 for h264
    • Throttle transcodes enabled, which seems to increase the number of videos I can have transcoding at once, since it stops transcoding ahead whenever a player already has enough buffered
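
    For anyone unfamiliar: VPP tonemapping does the HDR-to-SDR conversion in the GPU’s fixed-function video block via ffmpeg’s tonemap_vaapi filter, rather than through an OpenCL shader. A rough standalone sketch of that path looks like this (the input filename is a placeholder, and it needs an Intel iGPU with the iHD driver):

    # Hypothetical VPP tonemap test: decode on the GPU, tonemap HDR10 to SDR
    # bt709 in fixed-function VPP, then encode with VAAPI and discard the output.
    /usr/lib/jellyfin-ffmpeg/ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi \
      -i input_4k_hdr.mkv \
      -vf "tonemap_vaapi=format=nv12:t=bt709:m=bt709:p=bt709" \
      -c:v h264_vaapi -b:v 8M -f null -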

    This is on the latest Jellyfin linuxserver.io container with the OpenCL-Intel docker mod.

    Also, not sure if this is a factor, but I’ve got 16 GB of single-channel RAM in it, and I use a USB-mounted SSD for my cache and transcode folders. In the past I ran into bandwidth issues when my transcodes lived on the same drive as my media.
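
    For reference, the container setup is roughly this shape (the paths are placeholders, and double-check the DOCKER_MODS tag against the linuxserver.io docs rather than trusting my memory):

    # Sketch of the docker run, not my exact command. The /dev/dri passthrough
    # is what exposes the iGPU for QSV/VA-API, the docker mod pulls in the Intel
    # OpenCL runtime for the tonemapping fallback, and /transcodes is mounted
    # from the USB SSD so transcodes stay off the media drive.
    docker run -d --name=jellyfin \
      --device /dev/dri:/dev/dri \
      -e DOCKER_MODS=linuxserver/mods:jellyfin-opencl-intel \
      -v /path/to/config:/config \
      -v /path/to/media:/Films \
      -v /mnt/usb-ssd/transcodes:/transcodes \
      -p 8096:8096 \
      lscr.io/linuxserver/jellyfin:latest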

    I get 4K HDR to 4K SDR tonemapping plus 7.1 Dolby TrueHD to 2.0 AAC transcoding at 70-75fps with my setup.

    Transcoding of 4K down to lower resolutions is even faster. 4K HDR to 480p SDR runs at 191fps.

    I took a look at the benchmark script those results are from and compared its commands to the ffmpeg commands my Jellyfin server auto-generates for actual transcodes.

    Here’s what the command to transcode 4K HDR 10-bit HEVC with 7.1 AAC audio down to 4K SDR h264 with 2.0 AAC audio looks like on my machine:

    /usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M \
      -init_hw_device vaapi=va:,kernel_driver=i915,driver=iHD \
      -init_hw_device qsv=qs@va -filter_hw_device qs \
      -hwaccel vaapi -hwaccel_output_format vaapi -autorotate 0 \
      -i file:"/Films/Man of Steel (2013)/Man of Steel (2013) Bluray-2160p Proper.mkv" \
      -autoscale 0 -map_metadata -1 -map_chapters -1 -threads 0 \
      -map 0:0 -map 0:1 -map -0:s \
      -codec:v:0 h264_qsv -low_power 1 -preset medium -look_ahead 0 \
      -b:v 7616000 -maxrate 7616000 -bufsize 15232000 \
      -g:v:0 72 -keyint_min:v:0 72 \
      -vf "setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale_vaapi=format=nv12:extra_hw_frames=24,hwmap=derive_device=qsv,format=qsv" \
      -codec:a:0 libfdk_aac -ac 2 -ab 384000 -ar 48000 -af "volume=2" \
      -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 \
      -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts \
      -start_number 587 \
      -hls_segment_filename "/transcodes/53a1ea6d1a7a34b888e73230f9ff04e2%d.ts" \
      -hls_playlist_type vod -hls_list_size 0 \
      -y "/transcodes/53a1ea6d1a7a34b888e73230f9ff04e2.m3u8"

    And here’s the command that benchmark runs for 10-bit HEVC:

    /usr/lib/jellyfin-ffmpeg/ffmpeg -y -hide_banner -benchmark -report \
      -c:v hevc_qsv -i /config/ribblehead_4k_hevc_10bit.mp4 \
      -c:a copy -c:v hevc_qsv -preset fast -global_quality 18 -look_ahead 1 \
      -f null - 2>/dev/null

    So I’m gonna go out on a limb and say there’s a major difference in configuration between the two. Setting global_quality to 18 (lower numbers mean higher quality) is absurdly high quality for hardware h265 encoding; you can easily get away with 28 for “good enough”. My CRF of 25 for h265 encoding is already edging into placebo territory for most videos. And that’s all before considering the impact of low power mode, extra_hw_frames, etc.
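
    If you want a more apples-to-apples number, you could rerun the benchmark’s command with options closer to what Jellyfin actually generates: low power mode on, medium preset, and a saner quality target. Something like the sketch below, reusing the flags from the commands above (if ICQ isn’t supported in low power mode on your chip, swap -global_quality for a -b:v bitrate target):

    # Retuned benchmark: same pipeline, but with low power mode, medium preset,
    # and global_quality 28 instead of 18, to better match real Jellyfin settings.
    /usr/lib/jellyfin-ffmpeg/ffmpeg -y -hide_banner -benchmark -report \
      -c:v hevc_qsv -i /config/ribblehead_4k_hevc_10bit.mp4 \
      -c:a copy -c:v hevc_qsv -low_power 1 -preset medium -global_quality 28 \
      -f null - 2>/dev/null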