The Thriving Hunt for 4K/UHD/2160p/HiDPI

Recent improvements in the Linux kernel make it possible: enjoying a full 4K/UHD/2160p resolution of 3840x2160 pixels or more even with older hardware. As far as CPU performance is concerned, a Core 2 Duo or Core 2 Quad system is fully sufficient to handle such a resolution, at least for viewing photos and for normal office use. Many graphics cards which have never been advertised as 4K/UHD-capable can be made to display such modes by overclocking your TMDS (Transition Minimized Differential Signaling) link - i.e. the clock frequency of your HDMI or DVI output.

UHD monitor

Even a little, middle-aged Intel Atom notebook can be made to display 2560x1440 over its VGA connector - all of it without overclocking or any special tricks that would require one of the newer kernels.

After reading this article you will soon enjoy higher graphics modes under Linux - and you will never wish the old 1080p called 'Full HD' back. In our opinion higher graphics modes are simply a pleasure - not just because of the smooth shape of letters and the pin-sharp display of photos. You will soon realize that much more text fits on your monitor - an important point, especially for people who program, work with large texts or simply want to get an overview of their email inbox.

Get the Right Hardware

The first thing you will need is a 4K/UHD capable monitor like our AOC u2868pqu here. This monitor was available new for no more than 398.50€ at the time of purchase (2015-11-06) or for about 300€ used when writing this article (2016-03-07). It allows for a maximum refresh rate of 30Hz in 3840x2160 mode when driven over the HDMI or DVI port, while 60Hz would be possible over the DisplayPort. Adapters that convert an HDMI 2.0 signal into a DisplayPort signal were hardly available at the beginning of 2016 (most adapters convert in the other direction; the #HD2DP from startech.com is an exception).

For optimal smoothness of moving parts like the mouse pointer, 60Hz is recommended, though we can use a trick to make things as smooth as with 60Hz when only 30Hz is available: interlaced graphics modes, where all even lines are refreshed first, then all odd lines, alternately. This holds even though 25 frames per second are said to be enough for smooth motion, because the frame rate of the program decoding videos will not match the frame rate of the monitor exactly. A newer monitor of the same series, the AOC u3277pqu, supports 60Hz over HDMI 2.0. Nonetheless HDMI 2.0 is only available with the newest graphics cards, so for older hardware we will most likely have to make do with a clock rate of 30Hz anyway (or an effective 60Hz with interlaced modes). Unfortunately current versions of the nouveau driver exhibit problems with the mouse pointer when using interlaced modes. Usually 30Hz is sufficient even for videos.

Concerning the price of the hardware, the better Core 2 notebooks cost at least 200€, like the Fujitsu Siemens Xi 3650 with an NVIDIA Corporation G96M [GeForce 9600M GT] graphics card that achieves 3840x2160 at a TMDS frequency of 225MHz, yielding a screen refresh rate of at most 24Hz. This is somewhat below 30Hz, not a default refresh rate and therefore a frequency which is not supported by all 4K monitors. However the AOC u2868pqu, like most PC displays, supports many non-standard modes, as we will see shortly - which is generally not the case for TV screens. The Fujitsu Siemens Xi 3650 additionally offers flashrom support under Linux.

Those of you who prefer a business notebook like the Fujitsu Siemens Celsius H265/H270 with an Nvidia GeForce G96GLM [Quadro FX 770M] will get hardware that guarantees 3840x2160@30. It has many special features like an optional 3G/UMTS module, a chip card reader and many BIOS settings, for instance to switch off the TPM (Trusted Platform Module). Unfortunately we could not reach any refresh rate higher than 24Hz with kernel 4.5.0 and the Celsius. Nonetheless we suppose that this could change rapidly with newer versions of the nouveau driver. If you need guaranteed 30 or 60Hz under Linux you should buy a notebook with a newer graphics card. Under Windows, simply use a DVI to HDMI converter for 30Hz. Both notebooks have eSATAp, an SDHC card reader and an ExpressCard slot which can be used to add USB 3.0 support.

Core 2 and newer desktop computers are of course cheaper than notebooks. They can be made 4K/UHD-ready very easily with the right graphics card. As far as we have tested, many UHD/4K-capable graphics cards cause problems with suspend, s2ram or s2disk (hibernation), at least when used with various Core 2 Fujitsu hardware. A card that worked for us in every setting, yielding 3840x2160@30Hz, was the Radeon R5 230 Core 2GB DDR3 PCIe as available from XFX and other manufacturers. Unfortunately other cards caused s2ram problems: the Radeon R7 240 Core 2GB DDR3 PCIe, the Nvidia GeForce GT 720 GDDR5 and the GeForce 730. The GDDR5 card additionally had a heat problem. All of these cards are cooled passively - that means no additional noise.

A very pleasant feature of the R5 230 card is its flat cooling grid, which allows for single slot height and thus full usage of all other PCIe slots on your board. It has a VGA, a DVI and an HDMI output and can also be installed in desktops requiring half height, SFF (small form factor) PCIe cards. For most Core 2 hardware you will very likely want a USB 3.0 PCIe card as well, if an eSATAp slot bracket does not suffice for you. The Silverstone SST-EC04-P (also SFF compatible) worked well for us. One of its features is an internal USB 3.0 port, which is very practical if you want an SDHC/SDXC and CompactFlash card reader as available in the size of a floppy drive. That way you will be able to access your photos very quickly.

One last word about the hardware: Check your HDMI cables twice; older cables may not allow for the transmission of 4K modes at all. We have been using HDMI 2.0 cables in all settings, though basically any cable that supports HDMI 1.3 or 1.4 should suffice as well. With VGA, a good cable may likewise be necessary to support modes higher than Full HD.

User Defined Graphics Modes

Better Modes over HDMI/DVI: using the MacBook 3,1

The first computer we will have a look at is an old MacBook 3,1 refurbished to run Debian stable Linux. Though it should at least support Full HD for our external monitor, this mode is not provided and selected automatically. Though such an issue is rather seldom these days, we will show you how to make it run in Full HD and how to drive an external monitor with 2560x1600_30 using this notebook.

Look at the following command line:

~> xrandr
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 8192 x 8192
LVDS1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 286mm x 178mm
   1280x800      59.91 +
   1024x768      60.00*
   800x600       60.32    56.25
   640x480       59.94
DVI1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 621mm x 341mm
   1680x1050     59.88
   1280x1024     75.02    60.02
   1440x900      59.90
   1280x960      60.00
   1280x720      59.97
   1024x768      75.08    70.07    60.00*
   832x624       74.55
   800x600       72.19    75.00    60.32    56.25
   640x480       75.00    72.81    66.67    60.00
   720x400       70.08
~> cvt --reduced 1920 1080 60
# 1920x1080 59.93 Hz (CVT 2.07M9-R) hsync: 66.59 kHz; pclk: 138.50 MHz
Modeline "1920x1080R"  138.50  1920 1968 2000 2080  1080 1083 1088 1111 +hsync -vsync
~> xrandr --newmode "1920x1080R" 138.50 1920 1968 2000 2080 1080 1083 1088 1111 +hsync -vsync
~> xrandr --addmode DVI1 1920x1080R
~> xrandr --output DVI1 --mode 1920x1080R --right-of LVDS1
~> xrandr
DVI1 connected 1920x1080+1024+0 (normal left inverted right x axis y axis) 621mm x 341mm
   …
   1920x1080R    59.93*
   …
~> umc 1920 1080 60Hz
# 1920x1080x60.00 @ 67.260kHz
Modeline "1920x1080x60.00"  173.261760  1920 2040 2248 2576  1080 1084 1088 1121  -HSync +VSync

At first we query all screen settings as they are currently in effect. Then we let cvt produce a monitor modeline, which you can either include in your xorg.conf or use on the command line directly by defining a new mode with xrandr and adding that mode to the DVI output. Finally you will need to activate this new mode to test whether it works in practice. The --reduced parameter reduces blanking, i.e. the invisible afterrun of the 'ray' when going to the next line, so that a slightly higher vertical frequency becomes achievable with the same hardware. Generally it is better to use gtf instead of cvt for newer TFTs, though the difference in the produced modelines is, in our experience, very small. The umc tool is another alternative if cvt and gtf should not yield working results (sourceforge site, direct download of sources for v0.2). Under certain circumstances you may get even better results with the arachnoid modeline calculator. Besides this it additionally allows for interlaced modes featuring an effective frequency twice as high as otherwise.
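If you want to compare the generators yourself, simply run them side by side for the same target mode; we do not reproduce the full output here, but for 1920x1080 the GTF modeline comes out close to the umc result shown above:

~> gtf 1920 1080 60             # GTF modeline, close to the umc result above
~> cvt 1920 1080 60             # CVT with normal blanking, for comparison
~> cvt --reduced 1920 1080 60   # CVT with reduced blanking, as used before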

~> ./newmode --gtf DVI1 2560 1600 30
--output DVI1: 2560x1600
mode "2560x1600_30.00": 164.10 2560 2688 2960 3360 1600 1601 1604 1628 -HSync +Vsync
xrandr --output DVI1 --mode 2560x1600@30
~> xrandr --output DVI1 --mode 1920x1080R --right-of LVDS1
~> xrandr --output DVI1 --mode 2560x1600@30 --right-of LVDS1

Basically it is just the same procedure again for 2560x1440 and 2560x1600. In order to make trying out new resolutions easier for you, we have provided the newmode script, which combines all three necessary steps from above into one when defining a new mode. You can specify several other frequencies, separated by spaces, after the 30. This is useful for finding the highest achievable frequency. Before setting a new mode make sure that you can reset your screen to the old mode in some way. Do so by first setting the current, working graphics mode again, which is effectively a NOP (no-operation). Then, if something goes wrong, press the up key twice to recall that known-good command from the history and press return.

As you will not want to repeat these steps every time you boot, see the ready-prepared xorg.conf.MacBook3,1, which you will need to place at /etc/X11/xorg.conf, or see the newmode script at the bottom.
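The downloadable newmode script is the authoritative version; purely as an illustration, a minimal sketch of such a helper could look like this (option handling and error checks omitted):

#!/bin/bash
# newmode-like helper (sketch): generate, define and add a mode in one step
# usage: ./newmode [--gtf] OUTPUT WIDTH HEIGHT REFRESH [REFRESH ...]
calc=cvt; [ "$1" = "--gtf" ] && { calc=gtf; shift; }
out=$1; x=$2; y=$3; shift 3
for hz in "$@"; do                          # try each requested refresh rate
    line=$($calc "$x" "$y" "$hz" | grep -i '^ *modeline')
    set -- $line; shift                     # drop the literal 'Modeline'
    name=${1//\"/}; shift                   # mode name without the quotes
    xrandr --newmode "$name" "$@"           # define the mode ...
    xrandr --addmode "$out" "$name"         # ... and make it available on the output
    echo "try: xrandr --output $out --mode $name"
done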

Better Modes over VGA: using a PB Dots E2 Atombook

Achieving higher graphics modes over the VGA connector is a bit more tricky, as it may be necessary to assemble your own modeline. You may read the description at arachnoid (link is in the table below) on how to do this in detail. A modeline basically has the following form:

ModeLine ModeName DotClock ScreenX HSyncStart HSyncEnd HTotal ScreenY VSyncStart VSyncEnd VTotal

ModeName is the name of the new graphics mode as you will type it for xrandr or see it in your favourite monitor configuration tool. ScreenX and ScreenY make up the visible resolution on the screen. ScreenX < HSyncStart < HSyncEnd < HTotal are monotonically increasing values used for the horizontal synchronization of the cathode ray of a tube when it reverts to go to the next line. Modern LCD/TFT monitors are still signalled in a similar way. A similar relationship exists between the vertical synchronization components. The smaller the 'synchronization area' you specify, the lower the resulting frequency with which pixels are drawn (also: dot clock) can be, making room for better refresh rates (measured in fps):

DotClock in MHz = RefreshRate in Hz * HTotal * VTotal / 1,000,000

… whereby the RefreshRate in Hz corresponds to the number of frames finally delivered per second, which we have already been talking about; i.e. 30Hz, 60Hz etc.
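As a worked example with the 2560x1440 timings used in the modehack session below (HTotal=2744, VTotal=1482) and a refresh rate of 30Hz, you can let bc do the calculation; the +5000 merely rounds to two decimal places:

~> bc <<<"scale=2; (30 * 2744 * 1482 + 5000) / 1000000"
122.00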

Now if your image appears too far to the right, try adding multiples of eight to HSyncStart and HSyncEnd. Proceed similarly with VSyncStart and VSyncEnd, adding smaller increments, if the image is too far at the bottom - or subtract to move the picture in the other direction.

An image stretched too widely requires adding small values to HTotal. Nonetheless for the higher modes you may at first need a 'good guess' in order to obtain an image at all. In order to facilitate your guessing we have prepared a few bash shell procedures which spare you the task of repeatedly calculating the dot clock frequency:

~> cat modehack
delmod() { xrandr --delmode VGA-1 "$1"; xrandr --rmmode "$1"; }
addmod() { xrandr --newmode "$@"; xrandr --addmode VGA-1 "$1"; }
chmod() { delmod "$1"; addmod "$@"; }
newmod2() { xrandr --newmode "$1" $(bc <<<"scale=2; (30*$6*${10}+5000)/1000000") $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12} ${13}; }
newmod() { echo xrandr --newmode "$1" $(bc <<<"scale=2; ($2*$6*${10}+5000)/1000000") $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12} ${13};
           xrandr --newmode "$1" $(bc <<<"scale=2; ($2*$6*${10}+5000)/1000000") $3 $4 $5 $6 $7 $8 $9 ${10} ${11} ${12} ${13}; }
addmod() { newmod "$@"; xrandr --addmode VGA-1 "$1"; }
~> source modehack
~> addmod 2560x1440 30 2560 2568 2592 2744 1440 1441 1444 1482 -HSync +Vsync
xrandr --newmode 2560x1440 122.00 2560 2568 2592 2744 1440 1441 1444 1482 -HSync +Vsync
~> chmod 2560x1440 50 2560 2568 2592 2744 1440 1441 1444 1482 -HSync +Vsync
xrandr --newmode 2560x1440 203.33 2560 2568 2592 2744 1440 1441 1444 1482 -HSync +Vsync
~> xrandr --output VGA-1 --mode 2560x1440

First change the 'VGA-1' to the identifier of the output channel for your monitor inside the modehack script. Then load it with the source command. Finally add a new mode with addmod, and later on change a mode of the same name with the chmod command. Not only can you now try out the mode with xrandr, you can also copy the output line of addmod/chmod directly into your xorg.conf. We have prepared a file with the latter configuration, as seen above and known to work for the u2868pqu monitor, as xorg.conf.intel-VGA (to be renamed to /etc/X11/xorg.conf). The new settings in xorg.conf will be honored as soon as you reboot or restart your display manager. Quit all graphical programs before doing so and then log out of your display manager. From there press [Ctrl][Alt][F1-9] to go to a system terminal, or go back to graphics mode with [Ctrl][Alt][F7/F8/F1].

systemctl -t service -a | grep dm   # try to find out which display manager you are using
systemctl restart kdm.service       # or whatever you are using indeed: gdm, sddm, etc.

True 4K/UHD/2160p

In order to get a full 3840x2160 display we need to do a little bit more: improve the speed of data transmission through our cable. The data transmission through the cable is encoded via TMDS (Transition Minimized Differential Signaling), which avoids transmission errors and allows for higher transmission rates. It may be necessary to adjust the TMDS speed by hand in order to allow higher resolutions like UHD. You do not need to be afraid of overclocking your TMDS, as it could only damage your graphics card under the most unreasonable circumstances. The GeForce 9600M GT which we have tested has sustained a TMDS frequency of 225MHz over 24 hours of continuous operation without exhibiting any heat problems.

It may also be necessary or useful to adjust the TMDS speed with ATI cards, which use the radeon display driver. This is only possible with our additional non-mainline kernel patch. The patch is necessary for the Radeon R5 230, which we recommend for the sake of compatibility and space usage within your case. With the patch we offer here this card works entirely stably and without any kind of heat problems. A manual configuration of your graphics card as shown in the previous section is not necessary, as a TMDS clock of 297MHz is high enough for a default graphics mode. The Radeon R5 230 was originally sold by NeXus Mobile as 4K-ready over HDMI, which is not officially acknowledged by ATI.

Make sure that you have the right cable (HDMI 2.0 or at least 1.3) and that it allows for the frequency with which we want to drive the signal through it.

TMDS Frequency and Graphics Mode

Here is a list of which TMDS frequency and graphics mode combinations should generally be possible:

hdmimhz=165: 2560x1600 or 2560x1440 with 30 Hz (HDMI 1.0)
hdmimhz=225: 3840x2160, 23 Hz or perhaps 24 Hz
hdmimhz=297: 3840x2160, 30 Hz (roughly estimated as 23.00 / 225 * 297 ≈ 30.4)
hdmimhz=330: 4096x2160, 30 Hz (Dual-Link Feature, HDMI 1.3)

The TMDS pixel clock required for a certain graphics mode can be calculated as follows: 3840 * 2160 * 30Hz / 1,000,000 = 248.832 MHz, which is already covered by 297. Indeed it will require a somewhat higher frequency than calculated this way because of the blanking of your screen (i.e. where the ray goes to the next line or back up to the upper left corner).
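You can reproduce the naive calculation with bc and then let cvt show how much the blanking really adds; the pclk value in the printed modeline is the frequency that actually has to fit into your TMDS budget:

~> bc <<<"scale=3; 3840 * 2160 * 30 / 1000000"
248.832
~> cvt 3840 2160 30             # the pclk here includes the full blanking intervals
~> cvt --reduced 3840 2160 30   # reduced blanking keeps the pclk considerably lower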

The respective kernel parameters, as you can enter them on the Grub command line, are called nouveau.hdmimhz and radeon.hdmimhz, whereby radeon.hdmimhz requires the kernel patch mentioned before in addition to kernel version 4.5 or higher. Concerning Mageia, I had relied on information that the patch had been assimilated into the Mageia kernel. As it now turns out, this appears never to have been the case. For Intel graphics cards there is currently no facility to adjust the TMDS frequency.

Whenever you want to obtain a higher TMDS frequency you specify kernelmodule.hdmimhz=225, 297 or 330 on the kernel command line when booting via Grub. We recommend starting with 225 and then trying out higher frequencies step by step until your monitor stays dark, refusing to display anything; from there on you should not try an even higher setting. According to the developers 330 is the highest frequency and only available via Dual-Link DVI cables (2*165 MHz = 330 MHz). According to the same source HDMI cables are always 'single link' but allow for 600 MHz in the case of HDMI 2.0.

nouveau: 3840x2160, 23Hz

Nvidia guarantees a minimum supported TMDS frequency based on the chipset of your graphics card, although we have noticed that the GeForce 9600M GT (Tesla) supports a higher TMDS frequency than in the table below while maintaining stable operation.

Lately another tester reported to us on the #nouveau IRC channel that he could drive two displays with 3840x2160@60 Hz and one display with 3840x2160@30 Hz at the same time with a Fermi based card. Testing our GeForce 9600M GT and Quadro FX 770M graphics cards on Core 2 systems we could reach no more than 3840x2160 at 23 Hz, although the Quadro FX allows for higher hdmimhz rates than 225. If you have the second card, just test it again with the newest available kernel version; it should basically support 3840x2160@30 Hz. You may specify nouveau.duallink=0, especially when using HDMI, in order to switch the Dual-Link feature off. Nonetheless we could not draw any advantage from using this parameter with kernel 4.5.0.

Of greater importance is the version of the kernel you use. The nouveau.hdmimhz parameter is officially supported as of the 4.5.0 kernels, but not before.

~> uname -a
Linux AmiloXi3650 4.5.0-rc6-ARCH #5 SMP PREEMPT Wed Mar 2 16:13:46 CET 2016 x86_64 GNU/Linux
~/linux-stable> grep MODULE_PARM_DESC drivers/gpu/drm/nouveau/*.[ch] | egrep "hdmi|duallink"
drivers/gpu/drm/nouveau/nouveau_connector.c:MODULE_PARM_DESC(duallink, "Allow dual-link TMDS (default: enabled)");
drivers/gpu/drm/nouveau/nouveau_connector.c:MODULE_PARM_DESC(hdmimhz, "Force a maximum HDMI pixel clock (in MHz)");
~> lspci | grep -i VGA
01:00.0 VGA compatible controller: NVIDIA Corporation G96M [GeForce 9600M GT] (rev a1)
~> cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-custom root=/dev/disk/by-label/arch ro resume=/dev/disk/by-label/swap nouveau.hdmimhz=225

uname -a shows the version of the kernel you have booted with. For viewing all installed kernel versions you may either ask the package management system of your distribution or, more easily, ls /lib/modules or /boot. Sources of the current kernel, if installed by the package management system, often arrive at /usr/src/linux. If your kernel does not support the nouveau.hdmimhz parameter, head for the chapter on compiling the Linux kernel yourself.
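Whether the running kernel already knows the parameter can also be checked without reading the sources; modinfo lists all parameters of a module (provided the driver is built as a module):

~> ls /lib/modules                      # all installed kernel versions
~> modinfo nouveau | grep -i hdmimhz    # prints a 'parm: hdmimhz: ...' line if supported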

radeon: 3840x2160, 30Hz

At first glance things may appear somewhat more difficult when it comes to the radeon driver as used for ATI graphics cards. You will have to compile the kernel yourself and apply our kernel patch, which introduces the radeon.hdmimhz parameter. It has not yet been assimilated into the mainline kernel. Nonetheless, in our own experience the radeon patch seemed to be more stable and did a better job at TMDS (over)clocking than what the nouveau driver can currently offer; at least in the setting we could test here.

The patch is our own development. It combines the hdmimhz and duallink parameters into one: radeon.hdmimhz. This means that the duallink feature is currently always turned off as soon as the hdmimhz parameter is nonzero. That should work fine for a radeon.hdmimhz of 225 or 297.

If you are curious and want to have a look at the patch, you will see that it is not hard to understand: radeon_dvi_mode_valid simply returns true when the frequency given by the radeon.hdmimhz parameter is not exceeded. If you want to experiment with radeon.hdmimhz=330 and a dual-link connector, on the other hand, you may need to make the radeon_dig_monitor_is_duallink function avoid returning false at the beginning for exactly this configuration (see the radeon.duallink patch). If you should succeed with such an experiment on respective hardware, please report that to us. Up to now we have been rather cautious with the introduction of a duallink kernel module parameter because one graphics card may power multiple monitors, not all of them actually requiring the duallink feature. When deployed inappropriately it will distort or blacken previously working displays.

If you have purchased a Radeon R5 230 card for your computer, simply set radeon.hdmimhz=297 on the kernel command line and everything will work fine. You will not even have to configure the 3840x2160@30 mode manually. It will appear automatically and it will be set automatically. Yes, deploying 4K may be as simple as this! If you should succeed in overclocking another card to 600 MHz for HDMI 2.0, please do also write us an email.
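Analogous to the nouveau example further above, the kernel command line of such a system would simply carry the radeon parameter (illustrative; paths shortened):

~> cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-custom root=... ro radeon.hdmimhz=297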

The Linux Kernel

how to compile the Linux kernel

There are already quite a lot of instructions about this out there. Nonetheless, here we will show you in short what is important:

~> git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
~/linux-stable> zcat /proc/config.gz >.config    # or copy the config from /boot
~/linux-stable> make oldconfig
optionally: ~/linux-stable> make menuconfig      # want to change some kernel settings?
~/linux-stable> getconf _NPROCESSORS_ONLN        # how many cores does your CPU have (_NPROCESSORS_CONF) or how many are available for use?
4
~/linux-stable> make --jobs=4 bzImage            # compile a compressed kernel image with 4 parallel threads
~/linux-stable> make install                     # install the kernel image; works on most distributions
or:
~/linux-stable> cp arch/x86_64/boot/bzImage /boot/vmlinuz-custom   # install like this e.g. on the Arch distribution
~/linux-stable> make --jobs=4 modules            # compile kernel modules
~/linux-stable> make modules_install             # install the modules under /lib/modules/YOUR-KERNEL-VERSION
~/linux-stable> mkinitcpio -k YOUR-KERNEL-VERSION -g /boot/initramfs-custom   # Arch: create the initrd; other distros: mkinitrd / geninitramfs etc.

git is the version control system of the kernel. Other version control systems are svn and cvs. You will use it here to download the kernel sources and apply patches.

It is possible that the step zcat /proc/config.gz >.config does not work because the required procfs functionality has been disabled on your system. In this case find out your currently running kernel version via uname -r and then search under /boot for a matching config- file. It is very likely that make bzImage will still ask you a host of questions about the kernel configuration, especially when the new and old kernel versions do not match. Do not panic and simply press return to get a standard configuration.
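If you prefer not to answer those questions one by one, the kernel build system can fill in the default for every new option non-interactively:

~> uname -r                                         # currently running kernel version
~/linux-stable> cp /boot/config-$(uname -r) .config
~/linux-stable> make olddefconfig                   # accept the default for every new option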

The last step comprises generating a so-called initrd, which contains kernel modules directly needed for startup, e.g. in order to mount your root file system. The command for doing so is distribution dependent and can be found via something like man -k ramfs or man -k initrd. Fortunately most distributions place a hook for the initramfs generation in make install, so that the only thing you need to do is execute this command as the last command in the list above instead of manually invoking an initramfs generation.

Under Debian compiling your kernel can be as simple as this:

~> git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
~/linux-stable> cp /boot/config-VERSION .config
~/linux-stable> make oldconfig
~/linux-stable> make --jobs=4 deb-pkg    # compiles everything and builds Debian packages with 4 parallel threads
> dpkg -i linux-headers-VERSION.deb      # install the Debian packages
> dpkg -i linux-image-VERSION.deb

If you have a radeon card and need to apply our kernel patch, do so right after the git clone/checkout. The first two of the following commands are only necessary to initialize your personal settings if you have never used git on that computer before:

~> git config --global user.name "Max Mustermann"
~> git config --global user.email "your@email.com"
~/linux-stable> git am ~/Downloads/0001-radeon.hdmimhz-parameter-introduced.patch
Applying: radeon.hdmimhz parameter introduced

The patch description notes:
* proven to work for an XFX Radeon R5 230 card with radeon.hdmimhz=297: 3840x2160@30 is offered automatically and can be set successfully with an AOC u2868pqu monitor, apparently without heat issues (cooler moderately warm after running half a day)
* radeon_encoders.c: radeon_dig_monitor_is_duallink must always return false, otherwise the screen stays black for the settings described above

Finally you will have to add the kernel which you have just installed to the boot menu of your boot loader (see also: adjusting the kernel command line). Look for a command like grub-mkconfig, or better a distribution specific command, which will re-create your /boot/grub/grub.cfg automatically. Some distributions invoke grub-mkconfig automatically on make install or dpkg -i, so that no additional steps are required.

The commands from further above have been written for Arch Linux. They do a lot manually of what can be achieved by a simple make install under distributions like Debian: cp arch/x86_64/boot/bzImage /boot/vmlinuz-custom, mkinitcpio, grub-mkconfig. For the automatic install it is important that you call make install after make modules_install.

how to update an already compiled kernel

Now we will have a look at the patch we have just applied using git am:

~> git show    # quit the viewing with 'q'
~> git pull
remote: Counting objects: 7094, done.
remote: Compressing objects: 100% (7086/7086), done.
remote: Total 7094 (delta 4817), reused 3 (delta 0)
Receiving objects: 100% (7094/7094), 8.53 MiB | 910.00 KiB/s, done.
Resolving deltas: 100% (4817/4817), done.
~> git merge origin/master    # simply quit editing the comment for this merge with ':wq'
~> git log
~> gitk &

At first we are solely viewing the radeon-hdmimhz patch we have just applied. Then we update our local copy of the git repository. Nonetheless, for our local checkout (i.e. the working copy that you can see) nothing has actually changed yet. To get the newly downloaded changes into our own 'branch', which at this point solely contains the radeon-hdmimhz patch, execute git merge origin/master. 'origin/master' denotes the master branch of the remote repository we have just downloaded from with git pull. Finally we may like to view what git has done. Do so by executing git log, or, if you want to see the kernel git tree graphically, install and invoke gitk.

If you keep compiling the kernel with several patches but at some point want to revert to a clean state, the following commands may help:

> git checkout origin/master    # revert to the remote master branch
> git pull                      # updates the git repository - you may also execute this command on its own if you want to keep existing patches
> git checkout origin/master
> rm -fr .git/rebase-apply      # take care: this command removes a whole directory including its content; here it only deletes your local changes to the git repository
> git am xy.patch               # apply the patches which you want to have

adjusting the kernel command line

At first simply try out the nouveau.hdmimhz=XXX or radeon.hdmimhz=XXX kernel parameters by pressing 'e' over the respective boot loader entry at boot time with Grub. A cat /proc/cmdline before you reboot shows you where to add these parameters. Try lspci | grep -i VGA to determine your graphics card and see whether you have an ATI (radeon) or Nvidia (nouveau) card, so that you do not always need to specify both parameters. There is currently no hdmimhz parameter for Intel cards. On boot CDs it may be necessary to enter [ESC]menu[RETURN] in order to get to the menu with boot loader entries where you can press 'e'. All such changes at boot time are temporary; i.e. you would need to enter them on every boot.

In order to persistently add a kernel parameter to your bootup settings you will need to have a look at the boot loader configuration of your distribution. Most distributions either come with a graphical control center where you can modify the behaviour of Grub, or they ship the crucial configuration files under /etc/default or /etc/syscfg or the like. Look for a file called 'grub' or 'bootloader' there.
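On Debian, Ubuntu and many other distributions the file in question is /etc/default/grub; a sketch of the persistent change (the exact variable name may differ between distributions):

# /etc/default/grub - append the parameter to the default kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet radeon.hdmimhz=297"
# then regenerate the boot menu:
update-grub    # or: grub-mkconfig -o /boot/grub/grub.cfg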

If you are using Arch Linux, you will already have configured your boot loader manually at install time. Nonetheless we want to repeat here, for the sake of completeness, how that is usually done:

~> mcedit /boot/grub/menu.lst
~> grub-menulst2cfg /boot/grub/menu.lst /boot/grub/grub.cfg

Note that grub.cfg is an auto-generated file and will usually be overwritten by your distribution every time it installs the boot loader anew.

With almost all distributions the following program automatically creates a grub.cfg with boot menu entries for all available kernels. It also automatically generates initrds:

> update-grub[2] # update-grub or update-grub2

faster compilation: compile only custom modules

Compiling the whole kernel including all of its modules can be a lengthy affair which soon becomes annoying if you want to test for a regression or bugfix on different versions of the kernel. This is especially true when git bisect-ing a kernel to find the last good/working version when there is an issue. Here is how to compile only the currently loaded modules. You may even want to strip off further unneeded modules; display with modinfo what a module is for. In order to do the compilation on a fast machine or a silent desktop machine you may want to create a list of loaded modules which you ship to another computer. Don't forget to also copy the kernel configuration. If you are using the same base configuration on multiple machines (same distribution, same architecture) you may also want to cat their module lists together and eliminate duplicates with sort -u, as shown after the following listing.

host1> lsmod >modules.lis
host1> cp modules.lis /media/usbstick    # or: scp modules.lis host2:/usr/src/
host2> make LSMOD=../modules.lis localmodconfig
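Merging the module lists of several similar machines could then look like this (the file names are just examples):

host2> sort -u modules-host1.lis modules-host2.lis >modules.lis   # concatenate and drop duplicates
host2> make LSMOD=../modules.lis localmodconfig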

4K/UHD and your Desktop Environment

Icons and text will likely be too small for your taste with current desktop environments in the default configuration. The basic way to ensure correct viewing would be to measure the size of your monitor with a measuring tape and state the correct value in your xorg.conf. However, we want to save you from doing so. It is usually more effective to change the DPI value for your fonts and the icon sizes under systemsettings or systemsettings5 in KDE (Common Appearance and Behaviour / Application Appearance).

Under the newest version of KDE, KDE Plasma 5, there is also a global setting under systemsettings: „hardware ➙ hardware ➙ display and monitor ➙ display ➙ scale the display”.

With the xfce4-settings-manager we could only set the DPI size of the fonts via „appearance ➙ fonts ➙ DPI”, not the icon sizes. Modern desktop environments like the Unity Desktop under Ubuntu have a common continuously variable scaling factor for the whole GUI: „display devices ➙ scaling factor for the menu and title bar”; on the other hand „scale all window content accordingly” is only relevant when using multiple monitors: either scale by the size of the smaller or of the larger monitor. There is a third setting under „presentation ➙ appearance ➙ size of the starter symbols”.

Gnome has a scaling factor of one or two by default. However, two may already be too large for certain monitors. The following command allows for a continuously variable scaling factor as long as you are using Wayland: gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']". Find out whether Gnome runs under Wayland via env | grep -i wayland. If the output stays empty you are running the older Xorg. Finally the necessary setting can be found under „devices ➙ monitor/screen ➙ scaling”.
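As a copy-and-paste aid (the WAYLAND_DISPLAY value shown in the comment is just a typical example):

~> env | grep -i wayland   # e.g. WAYLAND_DISPLAY=wayland-0 when running under Wayland
~> gsettings set org.gnome.mutter experimental-features "['scale-monitor-framebuffer']"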

Cinnamon has the setting „settings ➙ general ➙ scaling”, which allows the values 1 and 2 without any intermediate steps. Finally you can find a similar setting with the dconf editor of the Mate Desktop: „org ➙ mate ➙ desktop ➙ interface ➙ window-scaling-factor”. Allowed values are again 1 and 2. The font size can be adjusted under „system ➙ settings ➙ presentation ➙ appearance ➙ fonts”.

Other desktop environments like IceWM or Enlightenment often do not have font DPI or icon size settings, which means that the window manager will not scale or adjust your window titles. Nonetheless you may still want Qt and Gtk apps like Firefox to deploy the right font size. The following commands achieve this for Qt apps, most GTK apps (Firefox), for Gnome 3 apps and finally for Gnome 2 apps:

xrandr --output DVI-D-1 --primary --dpi 128
export QT_SCALE_FACTOR=1.33
export GDK_DPI_SCALE=1.33
gsettings set org.gnome.desktop.interface text-scaling-factor 1.33
gconftool-2 --type float --set /desktop/gnome/font_rendering/dpi 128

Basically the right way would be to just use xrandr for this, but most applications do not honor this setting, at least not directly. A scaling factor of 1.33 equals a font DPI of 128 = 96 * 1.33. You can use the Xclient-settings script from above to achieve these tasks. Use xrandr | grep connected to get the right interface name, which will most likely not be DVI-D-1 on your system.

However, just executing these commands after your desktop environment has started is insufficient, since environment variables are inherited at program start. If you start a console client, set the environment variables like above (QT_SCALE_FACTOR and GDK_DPI_SCALE) and then quit that konsole session, these settings are lost. Place Xclient-settings into /etc/X11/ and insert the following line at the beginning of your /etc/X11/Xsession:

source /etc/X11/Xclient-settings
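As an illustration, such an Xclient-settings file might essentially bundle the commands shown before (a sketch; the output name and scaling factors must match your setup):

#!/bin/sh
# Xclient-settings (sketch): announce the DPI and set scaling factors for all X clients;
# sourced from /etc/X11/Xsession so that the exports reach every started program
xrandr --output DVI-D-1 --primary --dpi 128
export QT_SCALE_FACTOR=1.33
export GDK_DPI_SCALE=1.33
gsettings set org.gnome.desktop.interface text-scaling-factor 1.33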

In order not to have to edit /etc/X11/Xsession on each system installation or after a respective update, you may use the following commands to create and apply a patch:

cp /etc/X11/Xsession /etc/X11/Xsession.orig
# modify /etc/X11/Xsession
diff -c /etc/X11/Xsession.orig /etc/X11/Xsession >~/Xsession.patch    # create a context diff
sed -i s#/etc/X11/Xsession.orig#/etc/X11/Xsession# ~/Xsession.patch   # adjust the file name in the patch header
cd /
patch --forward -p0 -ci ~/Xsession.patch   # --forward ~ do not reverse the application of the patch, -p0 ~ use path names unmodified, -c ~ context diff, i ~ followed by the patch file