Dark's Code Dump

Possibly useful

Collectd disk plugin produces no output

I spent a while scratching my head over the collectd ‘disk’ plugin producing no output whatsoever on a Debian 9 server. No log output (other than successful load), no rrd files, nothing.

Frustratingly, debug logging in collectd is compiled out by default, and compiling collectd from the Debian source packages requires an insane number of build dependencies, so I avoided that route.

Instead, by analysing the source and history of the disk plugin, I came across this commit. The issue turned out to be simple – in the backported kernel I was using, /proc/diskstats has a tonne more fields than in the default 4.9 kernel, and this was causing the disk plugin to fall over silently.
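
A quick way to check whether you’re affected is to count the fields per line in /proc/diskstats – the stock 4.9 kernel reports 14, while newer backported kernels report more, which the old parser silently rejects:

# Print the distinct field counts in /proc/diskstats –
# 14 is what old collectd expects; 18 or more means it skips every line
awk '{ print NF }' /proc/diskstats | sort -nu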

It’s a ballache to do anything about this on Debian 9, assuming the backported kernel is essential. If you hit it, it’s probably best to just wait for Buster.

Tvheadend smooth output for 60hz display

Tvheadend 4.3 adds support for custom ffmpeg parameters via the ‘spawn’ stream profile. The following command line produces smooth output from interlaced 25/50hz Freeview on a 60hz display, without introducing the soap opera effect (only simple linear blending is used).

/usr/bin/ffmpeg -i pipe:0 -vf "yadif=1, minterpolate='mi_mode=blend:fps=60'" -b:v 3000k -bufsize 3000k -c:v libx264 -preset veryfast -c:a aac -c:s copy -f mpegts pipe:1

This uses a disgusting amount of CPU time, but it’s well worth it given there’s no other way of producing watchable output on many devices. On Android, Kodi, etc. you can have either linear blending or questionable deinterlacing, but not both, and certainly nothing as nice as yadif=1.
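
If you want to sanity-check the filter chain before wiring it into a spawn profile, you can replicate how Tvheadend invokes it by piping a recorded transport stream through the same command (recording.ts and test.ts are just placeholder names):

# Feed a recording in on stdin and capture the transcoded TS from stdout,
# exactly as the spawn profile does
cat recording.ts | /usr/bin/ffmpeg -i pipe:0 -vf "yadif=1, minterpolate='mi_mode=blend:fps=60'" -b:v 3000k -bufsize 3000k -c:v libx264 -preset veryfast -c:a aac -c:s copy -f mpegts pipe:1 > test.ts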

Brief review of Dell PowerEdge T110 II

  • Well designed + built
  • Quiet even at high load, more so than most desktop PCs
  • BIOS is absolute garbage:
    • No PCI-express 1.0(a) support, which prevents the use of many old low-speed PCIe cards such as NICs and TV tuners
      • Dell couldn’t give a toss about this, because hurr durr it’s a server who cares about standards if we can line our pockets from makers of PCIe NICs and raid cards via a little something we call Planned Obsolescence(tm)
    • Power saving features such as PCI ASPM are missing
    • Can’t disable the onboard GPU – in spite of a perfectly functional serial port and SoL implementation – unless a discrete GPU is present, but surprise surprise, 99% of GPUs don’t work due to the PCIe and power limitations.
    • Cannot boot from USB (or at all?) without at least one SATA device present
    • It’s made by Insyde, need I say more?

Garbage text from Serial-over-LAN on Dell server

Hit an issue on a PowerEdge T110 II where SOL was completely unusable with corrupted/garbage text.

Changing the terminal mode from VT100 to ANSI in the BIOS settings fixed it. (Even though I was using fully VT100 compatible terminals ¯\_(ツ)_/¯ )
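
For reference, one common way to attach to SOL from another machine is ipmitool – the BMC address and credentials below are placeholders:

# Open a Serial-over-LAN session to the BMC (end it with 'sol deactivate')
ipmitool -I lanplus -H 192.0.2.10 -U root -P password sol activate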

Can’t boot from USB on Dell PowerEdge T110 II

Found myself stuck with no apparent way to boot from USB (no relevant BIOS settings, and the boot manager was completely blank). Here’s what fixed it:

Plug in at least one SATA disk (yes, this is a ridiculous limitation), then go to the Boot Manager (F11) and press Enter on your SATA disk.

A menu then appears giving the option to select the USB drive.

How to remove all Javascript from WordPress

Why do modern web developers insist on hundreds of kB of Javascript when it achieves literally nothing at all? Here’s how you can make WordPress load several times faster without breaking any functionality, assuming your use case is a simple blog + comments.

Add the following to somewhere in your theme’s shared functions file, or similar:

function remove_all_scripts()
{
    // Leave scripts alone for logged-in users so logged-in functionality (wp-admin, admin bar) is unaffected
    if (!is_user_logged_in())
    {
        // Dequeue and deregister every script WordPress knows about
        foreach (array_keys(wp_scripts()->registered) as $handle)
        {
            wp_dequeue_script($handle);
            wp_deregister_script($handle);
        }
    }
}

// Hook in as late as possible so scripts enqueued by the theme and plugins are caught too
add_action('wp_enqueue_scripts', 'remove_all_scripts', PHP_INT_MAX);

The ‘is logged in’ check keeps the wp-admin panel working, among other things.
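
To confirm it’s working, fetch the front page as a logged-out visitor and check that no script tags survive (substitute your own URL):

# Should print 0 once every script has been dequeued
curl -s https://example.com/ | grep -ci '<script'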

Windows Dynamic Disk vs Storage Spaces vs Intel RST for RAID 1

Disks are 2x Western Digital Red 4TB. All configurations are RAID 1. (Technically Storage Spaces Mirror isn’t true RAID 1, but it’s the closest possible for this comparison.)
Disregard the varying partition sizes – all tests are at the start of the drives.

(Benchmark screenshot: Individual disk)

Dynamic Disk

(Benchmark screenshot: Dynamic Disk)

Seems to be good for performance, and is more portable than any hardware or fake RAID. Contrary to some outdated posts elsewhere, it does distribute reads across both drives. However it does not distribute sequential reads, so performance is not as great as it could be for some workloads.

The major drawback is how easily a resync is triggered and how much it destroys performance for real-world read loads until complete. For example, I loaded Divinity 2 while resyncing and it took minutes as opposed to seconds. Any time you BSOD or otherwise shut down improperly, or even hibernate (including fast startup), a resync is triggered.

Partition management is limited – the only software I could find capable of editing an existing Dynamic Disk setup was EaseUS, and when tested it froze while attempting to resize an empty partition and hosed the RAID. Additionally, if you have more than one partition on a Dynamic Disk RAID, resyncing occurs on each partition simultaneously, so on standard hard disks having more than one partition destroys resync performance!

Overall good performance and well-supported for future migrations, but a no-go for more than one partition, large disks, hibernation or PCs without flawless stability.

Storage Spaces

(Benchmark screenshot: Storage Spaces, default thin provisioning)

For the longest time I’ve been using Storage Spaces with zero data issues, but the poor performance of gaming workloads pushed me to look for alternatives. The preallocation done by Steam and Fortnite updates is extremely slow under Storage Spaces compared to bare disks(!) or any other type of RAID I’ve tried.

The benchmark results mask just how slow certain workloads can be – anything that involves reading/writing to the same file (e.g. patching) or allocating a large file using a non Windows API approach is dog slow. (OTOH fsutil file createnew is fast.)

(Benchmark screenshot: Storage Spaces, fixed provisioning)

That said, Storage Spaces still manages to outperform a single disk slightly for reads, and even for writes, with the latter likely due to the abstraction reordering the random writes produced by CrystalDiskMark into something more optimal.

I’ve seen little discussion of Fixed provisioning of Storage Spaces online – it’s hidden behind arcane powershell commands. As it supports normal defrag (which implies a more natural on-disk format) I was hoping performance would exceed thin-provisioned Storage Spaces, but it is actually slower across the board. There doesn’t seem to be any reason to use Fixed provisioning over the default thin-provisioned mode.

Speaking of on-disk formats, it scares me slightly that the contents of each disk aren’t a carbon copy of the other.

Compared to Dynamic Disks, partition management is good – you can resize and add/remove partitions at will with even more flexibility than an ordinary disk. Tools support is good, as the virtual disk is presented to the system as a normal Basic disk.

The resyncing issue of Dynamic Disks doesn’t exist at all in Storage Spaces – throughout countless BSODs, resets, power losses, etc. I’ve not once seen Storage Spaces do any kind of resyncing, nor have I experienced any issues with my data stored on it.

Overall good support and a solid choice if redundancy and data protection are your aims, but the performance is lacking.

Intel Rapid Storage Technology

I never thought I’d find myself considering fake RAID, but given I’m currently using a Supermicro server motherboard in my desktop with a special ‘enterprise’ incarnation of Intel RST, I figured I’d give it a shot. Note: I have reason to believe the performance of RSTe (as opposed to RST on desktop-grade motherboards) is improved, especially for RAID 1, so take these numbers with a pinch of salt.

(Benchmark screenshot: Intel RSTe)

The benchmark results for read performance eclipse the other options – even sequential reads are distributed across both disks. Writes are slower, but this is with all the dangerous write caching options disabled (I have no UPS). I can confirm these performance figures in real world performance – everything is subjectively faster than the other configurations.

I’m mildly concerned about what happens when I upgrade my PC and/or my motherboard fails, as my future plans lie with AMD. I put my mind at ease slightly when I discovered there is an option (unique to RAID 1) in RST that can turn the first disk back into a standard non-RAID disk without rewriting the data, so the on-disk format must be close to that of an individual disk. Seeing the amount of recovery software available, as well as a pure-software implementation in Linux, also gives me hope that I’m not digging my data into a hole. I actually feel happier with Intel RST than I would with a hardware RAID card in this regard – at least it will work on any Intel board rather than one particular RAID controller.
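
That pure-software implementation is mdadm’s IMSM (Intel Matrix) support – a rough sketch of how you’d inspect and assemble the array from a Linux live environment, assuming the members show up as /dev/sda and /dev/sdb:

# Check the member disks for Intel RST (IMSM) metadata
mdadm --examine /dev/sda /dev/sdb

# Assemble whatever it finds, including the IMSM container and the RAID 1 volume inside it
mdadm --assemble --scan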

This is good old-fashioned fake RAID, so partition management and tools support are flawless – as far as software is concerned, it is a single disk.

Like Dynamic Disks, resyncing is an issue after unclean shutdowns/BSODs/etc., but in my experience it only occurs in rare cases – many unclean shutdowns do not trigger it. It is also a nicer implementation – not triggered by hibernation/sleep, and it can be paused and resumed across a hibernation or shutdown, as opposed to Dynamic Disks, which restart from scratch. It also does not attempt to resync multiple partitions at once, so it is a relatively fast linear operation. System performance impact during a resync is roughly the same as Dynamic Disks – i.e. significant.

Overall, performance is excellent, software support is flawless, and future-proofing is slightly better than true hardware RAID but behind the other software options.

Final thoughts

There’s no one-size-fits-all solution, but I’ve found myself doing a complete 180 from ‘lol fake raid who uses that’ to actually recommending Intel RST above other options for Windows.

Dynamic Disks work well until you inadvertently trigger a resync, then you spend the next day being reminded of how bad they are. The design goals behind Storage Spaces – archival and flexibility – clearly come at the cost of performance.

I’ve ignored proper RAID cards here, but ultimately, until Microsoft get their act together and create something on the same level as MD or BTRFS, hardware remains king. I wouldn’t be surprised if there was indeed a financial reason why software RAID is slower on Windows.


Android Pie battery drain

I’ve been experiencing chronic battery drain since updating to Android Pie, attributed to Android System, Android OS, and Google Play Services.

I’ve finally found the culprit: the Pebble app. Replaced with Gadgetbridge and all is well.

The cause seems to be abuse of Google Play Services and of WebView (‘sandboxed_process0’), causing significant drain while the phone is awake and contributing to drain while locked. Probably underpinning features nobody asked for, like the Javascript SDK, PebbleKit, location-based weather, Pebble internet connectivity, etc. Gadgetbridge excludes all of that nonsense by design or as an option – I don’t know why I didn’t switch sooner!

And as a handy side effect, phone calls no longer show up as from Unknown!

VW fault code P188900 / P1889 / P1888

P188900 – Coolant shut-off valve – Short circuit to ground/open circuit, Coolant shut-off valve -N82 Short circuit to ground/open circuit

Getting this recurring fault code reported by CarPort on a Mk7 Golf. Dealership is adamant there are no faults reported via their diagnostic system, so presumably this is a false positive. I’m not getting any symptoms or warning lights whatsoever.

If you’re in the same boat, don’t bother chasing this up and don’t panic from what you’ll see on Google wrt coolant leaks destroying wires.

Veeam backups failing with network errors

I use Veeam Free and recently started getting network errors when trying to perform any backups. My destination is a Linux box via Samba, writing to an SMR disk with BTRFS – which can be quite unpredictably slow.

Errors included:

Error: Agent: Failed to process method {Transform.Patch}: An unexpected network error occurred. Failed to flush file buffers. File: [\\Redacted\Backup Job 2018-07-04T153041.vbk].

Error: An unexpected network error occurred. Failed to write data to the file [\\Redacted\Backup Job 2018-10-12T140141.vbk]. Agent failed to process method {Stg.RemoveOrphanedSessionKeys}.

Full backup file merge failed Error: Agent: Failed to process method {Transform.Patch}: An unexpected network error occurred.

Full backup file merge failed Error: An unexpected network error occurred. Failed to rename file from [\\Redacted\Backup Job.vbm_tmp] to [\\Redacted\Backup Job.vbm]. --tr:Error code: 0x0000003b --tr:FC: Failed to rename file from [\\Redacted\Backup Job.vbm_tmp] to [\\Redacted\Backup Job.vbm]. --tr:Failed to call DoRpc. CmdName: [FcRenameFile].

Very simple solution – raise the SMB client session timeout (the default is 60 seconds) so slow writes at the destination don’t get the connection dropped. In an admin powershell:

Set-SmbClientConfiguration -SessionTimeout 600