
The Thriving Hunt for 4K/UHD/2160p

Recent improvements of the Linux kernel make it possible: enjoying a full 4K/UHD/2160p resolution of 3840x2160 pixels or more even with older hardware. From the viewpoint of CPU performance a Core 2 Duo or Core 2 Quad system is fully sufficient to handle such a resolution, at least for viewing photos and for normal office use. Many graphics cards which have never been advertised as 4K/UHD-capable can be made to display such modes by overclocking the TMDS (Transition Minimized Differential Signaling) link, i.e. the clock frequency of your HDMI or DVI output.

Even a little, middle-aged Intel Atom notebook can be made to display 2560x1440 over its VGA connector - all of it without overclocking or any special tricks that would require one of the newer kernels.

After reading this article you will soon enjoy higher graphics modes under Linux - and you will never wish the old 1080p called 'Full HD' back. In our opinion higher graphics modes are simply a pleasure - not just because of the smooth shape of letters and the pin-sharp display of photos. You will soon realize that much more text fits on your monitor - an important issue, especially for people who program, work with large texts or who simply want to get an overview of their email inbox.

Get the Right Hardware

The first thing you will need is a 4K/UHD capable monitor like our AOC u2868pqu here. This monitor was available new for no more than 398.50€ at the time of purchase (2015-11-06) or for about 300€ used when writing this article (2016-03-07). It allows for a maximum frequency of 30Hz in 3840x2160 mode when driven over the HDMI port while 60Hz would be possible over the DisplayPort. Adapters that convert an HDMI 2.0 signal into a DisplayPort signal were hardly available at the beginning of 2016 (most adapters convert in the other direction; the HD2DP from startech.com is an exception).

For optimal smoothness of moving parts like the mouse pointer 60Hz would be recommended, though when only 30Hz are available we can use a trick to make things as smooth as with 60Hz: interlaced graphics modes, where first all even lines are refreshed, then all odd lines and so on in alternation. This even though 25 frames or pictures per second are said to be enough for smooth motion. A newer monitor of the same series, the AOC u3277pqu, supports 60Hz over HDMI 2.0. Nonetheless HDMI 2.0 is only available with the newest graphics cards so that we will most likely have to make do with a clock rate of 30Hz for older hardware anyway (or effectively 60Hz with interlaced modes).

Concerning the price of the hardware, the better Core 2 notebooks cost at least 200€, like the Fujitsu Siemens Xi 3650 with an NVIDIA G96M [GeForce 9600M GT] graphics card featuring 3840x2160 at a TMDS frequency of 225MHz, yielding a screen refresh rate of at most 24Hz. This is somewhat below 30Hz, not a default refresh rate and therefore a frequency which is not supported by all 4K monitors. However the AOC u2868pqu supports many non-standard modes, as we will see shortly, and should thus be worth its recommendation. Those of you who need the 30Hz mode for ultimate monitor compatibility may e.g. prefer a Fujitsu Siemens Celsius H265/H270 business notebook with an Nvidia G96GLM [Quadro FX 770M] card. Its DVI output port can happily be converted into HDMI with a respective adapter, allowing 4K/30Hz modes. Using a new enough DVI cable the u2868pqu can nonetheless yield as good resolutions via DVI as via HDMI. Both notebooks have eSATAp, an SDHC card reader and an ExpressCard slot which can be used to add USB 3.0 support.

Core 2 and newer desktop computers are of course cheaper than notebooks. They can be made 4K/UHD-ready very easily with the right graphics card. As far as we have tested, many UHD/4K-capable graphics cards cause problems with suspend, s2ram or s2disk (hibernation), at least when used with various Core 2 Fujitsu hardware (we severely doubt that any other systems of the same age would perform any better). A card that worked for us in every setting, yielding 3840x2160@30Hz, was the Radeon R5 230 Core 2GB DDR3 PCIe as available from XFX and other manufacturers. Unfortunately other cards caused s2ram problems: the Radeon R7 240 Core 2GB DDR3 PCIe, the Nvidia GeForce GT 720 GDDR5 and the GeForce 730. The GDDR5 card additionally had a heat problem. All of these cards are cooled passively; that means no additional noise.

A very pleasant feature of the R5 230 card is its flat cooling grid that allows for single slot height and thus full usage of all other PCIe slots on your board. It has a VGA, a DVI and an HDMI output and can also be installed in desktops requiring half height SFF (small form factor) PCIe cards. For most of the Core 2 hardware you will very likely want a USB 3.0 PCIe card as well, unless an eSATAp slot bracket suffices for you. The SilverStone SST-EC04-P (also SFF compatible) worked well for us. One of its features is an internal USB 3.0 port which is very practical if you want an SDHC/SDXC and CompactFlash card reader as available in the size of a floppy drive. That way you will be able to access your photos very quickly.

One last word about the hardware: check your HDMI cables twice; older cables may not allow for the transmission of 4K modes at all. We have been using HDMI 2.0 cables in all settings though basically any cable that supports HDMI 1.4 should suffice as well.

User Defined Graphics Modes

Better Modes over HDMI/DVI: using the MacBook 3,1

The first computer we will have a look at is an old MacBook 3,1 refurbished to run Debian stable Linux. Though it should at least support Full HD for our external monitor, this mode is not provided and selected automatically. While such an issue is rather seldom these days, we will show you how to make it run in Full HD and how to power an external monitor with 2560x1600_30 using this notebook.

Look at the following command line:

~> xrandr
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 8192 x 8192
LVDS1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 286mm x 178mm
   1280x800      59.91 +
   1024x768      60.00*
   800x600       60.32    56.25
   640x480       59.94
DVI1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 621mm x 341mm
   1680x1050     59.88
   1280x1024     75.02    60.02
   1440x900      59.90
   1280x960      60.00
   1280x720      59.97
   1024x768      75.08    70.07    60.00*
   832x624       74.55
   800x600       72.19    75.00    60.32    56.25
   640x480       75.00    72.81    66.67    60.00
   720x400       70.08
~> cvt --reduced 1920 1080 60
# 1920x1080 59.93 Hz (CVT 2.07M9-R) hsync: 66.59 kHz; pclk: 138.50 MHz
Modeline "1920x1080R"  138.50  1920 1968 2000 2080  1080 1083 1088 1111 +hsync -vsync
~> xrandr --newmode "1920x1080R" 138.50 1920 1968 2000 2080 1080 1083 1088 1111 +hsync -vsync
~> xrandr --addmode DVI1 1920x1080R
~> xrandr --output DVI1 --mode 1920x1080R --right-of LVDS1
~> xrandr
…
DVI1 connected 1920x1080+1024+0 (normal left inverted right x axis y axis) 621mm x 341mm
   …
   1920x1080R    59.93*
   …

At first we query all screen settings as they are currently in effect. Then we let cvt produce a monitor modeline which you can either include in your xorg.conf or use on the command line directly by defining a new mode with xrandr and adding that mode to the list of modes available for the DVI output. Finally you will need to activate the new mode to test whether it works in practice. The --reduced parameter reduces blanking, i.e. the invisible afterrun of the 'ray' when going to the next line, so that a slightly higher vertical frequency is achievable with the same hardware. Generally it is better to use gtf instead of cvt for newer TFTs though the difference in the produced modelines is in our experience very small. Under certain circumstances you may get even better results with the arachnoid modeline calculator. Besides this it additionally allows for interlaced modes featuring an effective frequency twice as high as without.

~> ./newmode --gtf DVI1 2560 1600 30
--output DVI1: 2560x1600
mode "2560x1600_30.00": 164.10 2560 2688 2960 3360 1600 1601 1604 1628 -HSync +Vsync
xrandr --output DVI1 --mode 2560x1600@30
~> xrandr --output DVI1 --mode 1920x1080R --right-of LVDS1
~> xrandr --output DVI1 --mode 2560x1600@30 --right-of LVDS1

Basically it is just the same procedure again for 2560x1440 and 2560x1600. In order to make trying out new resolutions simpler we have provided the newmode script which combines all three necessary steps from above into one when defining a new mode. You can specify a couple of other frequencies separated by spaces after the 30 as well. This is useful for probing the highest achievable frequency. Before setting a new mode make sure that you can reset your screen to the old mode in some way. Do so by first setting the current and working graphics mode again, which is effectively a NOP (no-operation). Then, if something goes wrong, press the up-key twice to recall that command from the history and press return.
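For illustration, here is a minimal sketch of the core step any such script has to perform: parsing the Modeline printed by cvt or gtf into a mode name and its timing parameters so that they can be fed to xrandr --newmode. The function name parse_modeline is our own invention for this sketch and is not part of the actual newmode script:

```shell
# parse_modeline: read a cvt/gtf "Modeline ..." line on stdin and print the
# mode name (with the surrounding quotes stripped) followed by its timings
parse_modeline() {
    sed -n 's/^ *Modeline  *"\([^"]*\)"  *\(.*\)$/\1 \2/p'
}

# typical usage (requires cvt and a running X server for the xrandr calls):
#   set -- $(cvt --reduced 1920 1080 60 | parse_modeline)
#   name=$1; shift
#   xrandr --newmode "$name" "$@"
#   xrandr --addmode DVI1 "$name"
```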

As you do not want to repeat these steps every time you boot, have a look at the ready-prepared xorg.conf.MacBook3,1, which you will need to place as /etc/X11/xorg.conf, or at the newmode script at the bottom.

Better Modes over VGA: using a PB Dots E2 Atombook

Achieving higher graphics modes over the VGA connector is a bit more tricky as it may be necessary to assemble your own modeline. You may read the description at arachnoid (link in the table below) on how to do this in detail. A modeline basically has the following form:

ModeLine ModeName DotClock ScreenX HSyncStart HSyncEnd HTotal ScreenY VSyncStart VSyncEnd VTotal

ModeName is the name of the new graphics mode as you will type it for xrandr or see it in your favourite monitor configuration tool. ScreenX and ScreenY make up the visible resolution on the screen. ScreenX < HSyncStart < HSyncEnd < HTotal are monotonically increasing values as used for the horizontal synchronization of the cathode ray of a tube when it reverts to go to the next line. Modern LCD/TFT monitors are still signalled in a similar way. A similar relationship exists between the vertical synchronization components. The smaller the 'synchronization area' you specify, the lower the resulting frequency pixels are drawn with (also: dot clock) can be, making space for better refresh rates (measurable in fps):

DotClock in MHz = RefreshRate in Hz * HTotal * VTotal / 1,000,000

… whereby the RefreshRate in Hz corresponds to the number of frames finally delivered per second, as we have already been talking about; i.e. 30Hz, 60Hz etc.
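The formula can be checked quickly on the shell. The helper function below is just a sketch of the calculation; it mimics the rounding of the modehack script further below (+5000 rounds up to the next kHz, truncated to two decimals) but uses awk instead of bc:

```shell
# dotclock <refresh_Hz> <HTotal> <VTotal>: print the required pixel clock in MHz
dotclock() {
    awk -v r="$1" -v h="$2" -v v="$3" \
        'BEGIN { printf "%.2f\n", int((r*h*v+5000)/10000)/100 }'
}

dotclock 30 2744 1482    # the 2560x1440@30 timings from below -> 122.00 MHz
```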

Now if your image appears too far to the right try to add multiples of eight to HSyncStart and HSyncEnd. Proceed similarly with VSyncStart and VSyncEnd, adding smaller increments, if the image is too far at the bottom; or subtract to move the picture into the other direction.

An image stretched too widely requires adding small values to HTotal. Nonetheless for the higher modes you may at first need a 'good guess' in order to obtain an image at all. To facilitate your guessing we have prepared a few bash shell procedures which absolve you from repeatedly having to calculate the dot clock frequency:

~> cat modehack
delmod() { xrandr --delmode VGA-1 "$1"; xrandr --rmmode "$1"; }
addmod() { xrandr --newmode "$@"; xrandr --addmode VGA-1 "$1"; }
chmod()  { delmod "$1"; addmod "$@"; }
newmod2() { xrandr --newmode "$1" $(bc <<<"scale=2; (30*$6*${10}+5000)/1000000") \
            $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12} ${13}; }
newmod() { echo xrandr --newmode "$1" $(bc <<<"scale=2; ($2*$6*${10}+5000)/1000000") \
           $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12} ${13};
           xrandr --newmode "$1" $(bc <<<"scale=2; ($2*$6*${10}+5000)/1000000") \
           $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12} ${13}; }
addmod() { newmod "$@"; xrandr --addmode VGA-1 "$1"; }
~> source modehack
~> addmod 2560x1440 30 2560 2568 2592 2744 1440 1441 1444 1482 -HSync +Vsync
xrandr --newmode 2560x1440 122.00 2560 2568 2592 2744 1440 1441 1444 1482 -HSync +Vsync
~> chmod 2560x1440 50 2560 2568 2592 2744 1440 1441 1444 1482 -HSync +Vsync
xrandr --newmode 2560x1440 203.33 2560 2568 2592 2744 1440 1441 1444 1482 -HSync +Vsync
~> xrandr --output VGA-1 --mode 2560x1440

At first change the 'VGA-1' to the identifier of the output channel of your monitor inside the modehack script. Then load it with the source command. Afterwards you can add a new mode with addmod and later on change a mode of the same name with the chmod command. Not only can you try out the mode now with xrandr, you can also take the output line of addmod/chmod directly into your xorg.conf. We have prepared a file with such a configuration, known to work for the u2868pqu monitor, as xorg.conf.intel-VGA (to be renamed to /etc/X11/xorg.conf). The new settings in xorg.conf will be honored as soon as you reboot or restart your display manager. Quit all graphical programs before doing so and then log out of your display manager. From there press [Ctrl][Alt][F1-9] in order to go to a system terminal or back to graphics mode with [Ctrl][Alt][F7/F8/F1].

systemctl -t service -a | grep dm   # try to find out which display manager you are using
systemctl restart kdm.service       # whichever you are using indeed: gdm, sddm, etc.

True 4K/UHD/2160p

In order to get a full 3840x2160 display we need to do a little bit more: improve the speed of data transmission for our cable, the TMDS and the output connector. Make sure that you have the right cable (HDMI 2.0 or at least 1.4) and that it allows for the frequency with which we want to drive the signal through it.

nouveau: 3840x2160, 23Hz

The standard TMDS frequency is 165MHz. It allows for 2560x1600_30.00 or 2560x1440_30.00. If you want higher graphics modes with an Nvidia card try to specify nouveau.hdmimhz=225, 297 or 330 on the kernel command line when you boot via Grub, LiLo or any other boot loader. We recommend to try 225 first and then the higher values until your screen refuses to show an image; but not any higher. 330 is currently the highest value and said to be only available for dual-link DVI ports. HDMI ports are said to be always single-link. You could specify that additionally by nouveau.duallink=0. However in our tests with kernel 4.5.0-rc6 we could not draw an advantage out of using this additional parameter.

Of greater importance is the version of the kernel you use. The nouveau.hdmimhz parameter is officially supported from the 4.5.0 kernels on, but not before.

~> uname -a
Linux AmiloXi3650 4.5.0-rc6-ARCH #5 SMP PREEMPT Wed Mar 2 16:13:46 CET 2016 x86_64 GNU/Linux
~/linux-stable> grep MODULE_PARM_DESC drivers/gpu/drm/nouveau/*.[ch] | egrep "hdmi|duallink"
drivers/gpu/drm/nouveau/nouveau_connector.c:MODULE_PARM_DESC(duallink, "Allow dual-link TMDS (default: enabled)");
drivers/gpu/drm/nouveau/nouveau_connector.c:MODULE_PARM_DESC(hdmimhz, "Force a maximum HDMI pixel clock (in MHz)");
~> lspci | grep -i VGA
01:00.0 VGA compatible controller: NVIDIA Corporation G96M [GeForce 9600M GT] (rev a1)
~> cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-custom root=/dev/disk/by-label/arch ro resume=/dev/disk/by-label/swap nouveau.hdmimhz=225

uname -a shows the version of the kernel you have booted with. For viewing all installed kernel versions you may either want to ask the package management system of your distribution or, easier, ls /lib/modules or /boot. Sources of the current kernel, if installed by the package management system, often arrive at /usr/src/linux. If your kernel does not support the nouveau.hdmimhz parameter then head for the chapter on compiling the Linux kernel yourself.

Here is a short summary of what currently works:

Be warned that with release candidate 6 and the Nvidia G96GLM [Quadro FX 770M] we could currently not achieve anything higher than 3840x2160_23.00, even though hdmimhz values higher than 225 were possible. Still this is work in progress and we are looking forward to improvements in newer versions. If you compile from scratch you will likely already have a newer and better nouveau/drm kernel module.

radeon: 3840x2160, 30Hz

At first glance things may appear somewhat more difficult when it comes to the radeon driver as used for ATI graphics cards. You will have to compile the kernel yourself and apply our kernel patch which introduces the radeon.hdmimhz parameter. It has not yet been merged into the mainline kernel. Nonetheless according to our own experience the radeon patch seemed to be more stable and did a better job at TMDS (over)clocking than what the nouveau driver can currently offer; at least in the settings we could test here.

The patch is our own development. It combines the hdmimhz and duallink parameters into one: radeon.hdmimhz. This means that the duallink feature is currently always turned off as soon as the hdmimhz parameter is nonzero. That should work fine for a radeon.hdmimhz of 225 or 297 (for details about hdmimhz/duallink and possible parameter values please also read the section about nouveau and UHD).

If you are curious and want to have a look at the patch you will see that it is not hard to understand: radeon_dvi_mode_valid simply returns true when the frequency given by the radeon.hdmimhz parameter is not exceeded. If you want to experiment with radeon.hdmimhz=330 and a dual-link connector on the other hand, you may need to make the radeon_dig_monitor_is_duallink function avoid returning false at the beginning for exactly this configuration. If you should succeed with such an experiment on the respective hardware please report it to us so that we can adjust our patch. Up to now we have been rather cautious with the introduction of a duallink kernel module parameter because one graphics card may power multiple monitors, not all of them actually requiring the duallink feature. When deployed inappropriately it will distort or blacken previously working displays.

If you have purchased a Radeon R5 230 card for your computer simply set radeon.hdmimhz=297 on the kernel command line and everything will work fine. You will not even have to configure the 3840x2160@30 mode manually. It will appear and be set automatically. Yes, deploying 4K may be as simple as this!

The Linux Kernel

how to compile the Linux kernel

There are already quite a lot of instructions about this out there. Nonetheless we will show you in short what is important:

~> git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
~/linux-stable> zcat /proc/config.gz >.config
~/linux-stable> make oldconfig
optionally: ~/linux-stable> make menuconfig # want to change some kernel settings?
~/linux-stable> getconf _NPROCESSORS_ONLN # how many cores does your CPU have (_NPROCESSORS_CONF) or how many are available for use?
4
~/linux-stable> make --jobs=4 bzImage # compile a compressed kernel image with 4 threads
~/linux-stable> make install # install the kernel image; works on most distributions
or: ~/linux-stable> cp arch/x86_64/boot/bzImage /boot/vmlinuz-custom # install like this e.g. when you have the Arch distribution
~/linux-stable> make --jobs=4 modules # compile kernel modules
~/linux-stable> make modules_install # install the modules under /lib/modules/YOUR-KERNEL-VERSION
~/linux-stable> mkinitcpio -k YOUR-KERNEL-VERSION -g /boot/initramfs-custom # Arch: create initrd; other distros: mkinitrd/geninitramfs etc.

git is the version control system of the kernel (other version control systems are svn and cvs). You will use it here to download the kernel sources and to apply patches.

The last step comprises generating a so-called initrd which contains the kernel modules directly needed for startup, e.g. in order to mount your root file system. The command for doing so is distribution dependent and can be found out by something like man -k ramfs or man -k initrd. Fortunately most distributions manage to place a hook for the initramfs generation in make install so that the only thing you need to do is execute this command as the last command in the list above instead of manually invoking an initramfs generation.
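As a rough orientation, these are the usual initramfs commands of some major distributions; the exact names and options below are assumptions from common defaults and should be verified with man -k initrd on your system:

```shell
# Debian/Ubuntu:
#   update-initramfs -c -k YOUR-KERNEL-VERSION
# Fedora/RHEL:
#   dracut --kver YOUR-KERNEL-VERSION
# openSUSE:
#   mkinitrd
# Arch Linux:
#   mkinitcpio -k YOUR-KERNEL-VERSION -g /boot/initramfs-custom
```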

If you have a radeon card and need to apply our kernel patch do so right after the git clone/checkout. The first two of the following commands are only necessary to initialize your personal settings if you have never used git on that computer before:

~> git config --global user.name "Max Mustermann"
~> git config --global user.email "your@email.com"
~/linux-stable> git am ~/Downloads/0001-radeon.hdmimhz-parameter-introduced.patch
applying: radeon.hdmimhz parameter introduced
 * proven to work for a radeon XFX R5 230 card with radeon.hdmimhz=297
   3840x2160@30 is offered automatically and can be set successfully with an
   AOC u2868pqu monitor apparently without heat issues (cooler moderately
   warm after running half a day)
 * radeon_encoders.c: radeon_dig_monitor_is_duallink must always return false
   otherwise screen stays black for the settings described above

Finally you will have to add the kernel which you have just installed into the boot menu of your boot loader (see also: adjusting the kernel command line). Look for a command like grub-mkconfig or better a distribution specific command which re-creates your /boot/grub/grub.cfg automatically.

how to update an already compiled kernel

Now we will have a look at the patch we have just applied using git am:

~> git show # quit the viewing with 'q'
~> git pull
remote: Counting objects: 7094, done.
remote: Compressing objects: 100% (7086/7086), done.
remote: Total 7094 (delta 4817), reused 3 (delta 0)
Receiving objects: 100% (7094/7094), 8.53 MiB | 910.00 KiB/s, done.
Resolving deltas: 100% (4817/4817), done.
~> git merge origin/master # simply quit editing the comment for this merge with ':wq'
~> git log
~> gitk &

At first we solely view the radeon-hdmimhz patch we have just applied. Then we update our local copy of the git repository. Nonetheless nothing has actually changed for our local checkout (i.e. the working copy that you can see). To get the newly downloaded changes into our own branch, which at this point solely contains the radeon-hdmimhz patch, execute git merge origin/master. 'origin/master' denominates the master branch of the remote repository we have just downloaded from by git pull. Finally we may like to view what git has done now. Do so by executing git log or, if you want to see the kernel git tree graphically, install and invoke gitk.

adjusting the kernel command line

At first simply try out the nouveau.hdmimhz=XXX or radeon.hdmimhz=XXX kernel parameters by pressing 'e' when hovering over the respective boot loader entry at boot time with Grub. A cat /proc/cmdline before you reboot shows you where to add these parameters. Try lspci | grep -i VGA to determine your graphics card and see whether you have an ATI (radeon) or Nvidia (nouveau) card so that you do not always need to specify both parameters. There is currently no hdmimhz parameter for Intel cards. On boot CDs it may be necessary to enter [ESC]menu[RETURN] in order to get to the menu with boot loader entries where you can press 'e'. All such changes at boot time are temporary; i.e. you would need to enter them on every boot.

In order to persistently add a kernel parameter to your bootup settings you will need to have a look at the boot loader configuration of your distribution. Most distributions either come with a graphical control center where you can modify the behaviour of Grub or they ship crucial configuration files under /etc/default or /etc/sysconfig or the like. Look for a file called 'grub' or 'bootloader' there.
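On distributions that use Grub2 the relevant file is typically /etc/default/grub; the following is only a hedged sketch, as the exact split between GRUB_CMDLINE_LINUX and GRUB_CMDLINE_LINUX_DEFAULT and the regeneration command may differ per distribution:

```shell
# excerpt from /etc/default/grub: append the hdmimhz parameter persistently
GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.hdmimhz=297"

# afterwards regenerate the boot loader configuration, e.g.:
#   grub-mkconfig -o /boot/grub/grub.cfg
```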

In case you are using Arch Linux you will already have configured your boot loader manually at install time. Nonetheless, for the sake of completeness, we want to repeat here how that is usually done:

~> mcedit /boot/grub/menu.lst
~> grub-menulst2cfg /boot/grub/menu.lst /boot/grub/grub.cfg

Note that grub.cfg is an auto-generated file and will usually be overwritten by your distribution every time it installs the boot loader anew.


Additional Materials

newmode xorg.conf.MacBook3,1
modehack xorg.conf.intel-VGA
arachnoid modeline calculator xorg.conf.nouveau-225
kernel patch for radeon.hdmimhz xorg.conf.radeon-297


4K/UHD and your Desktop Environment

Icons and text will likely be too small for your taste with current desktop environments in the default configuration. The basic way to ensure correct viewing would be to measure the size of your monitor with a measuring tape and state the correct value in your xorg.conf. However we want to save you from doing so. It is usually more effective to change the dpi value for your fonts and the icon sizes in systemsettings or systemsettings5 under KDE (Common Appearance and Behaviour / Application Appearance), in the xfce4-settings-manager or in the respective application for Gnome.
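A desktop independent alternative is to raise the Xft font DPI via X resources; the value 162 below is only an assumption that roughly matches a 28" UHD panel and should be adapted to your monitor:

```shell
# append the font DPI setting to your X resources file
echo "Xft.dpi: 162" >> "$HOME/.Xresources"
# load it into the running X server (takes effect for newly started programs):
#   xrdb -merge ~/.Xresources
```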

Contact

Elmar Stellnberger:
estellnb@elstel.org

If you have comments, questions, etc. write me, the author of this page, an email.

