I spent a while scratching my head over the collectd ‘disk’ plugin producing no output whatsoever on a Debian 9 server. No log output (other than successful load), no rrd files, nothing.
Frustratingly, debug logging in collectd is compiled out by default, and compiling collectd from the Debian source packages pulls in an insane number of dependencies, so I avoided that route.
Instead, by analysing the source and history of the disk plugin, I came across this commit. The issue turned out to be simple – the backported kernel I was using emits a tonne more fields in /proc/diskstats than the default 4.9 kernel does, and this was causing the disk plugin to fall over silently.
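You can see the mismatch for yourself by counting fields. The line below is a hypothetical sample in the newer layout; on a real system you'd run the awk against /proc/diskstats itself:

```shell
# Hypothetical 4.18-style /proc/diskstats line: 3 identifier fields
# plus 15 counters = 18 fields. The 4.9-era disk plugin only
# understood the older 14-field layout, hence the silent failure.
line='8 0 sda 100 0 800 40 50 0 400 30 0 60 70 10 0 80 5'
echo "$line" | awk '{ print NF " fields" }'
```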
It’s a ballache to do anything about this on Debian 9, assuming the backported kernel is essential. If you hit it, best to just wait for Buster.
Tvheadend 4.3 adds support for custom ffmpeg parameters via the ‘spawn’ stream profile. The following command line facilitates smooth output from interlaced 25/50Hz Freeview to 60Hz without introducing the soap opera effect (only simple linear blending is used).
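(The exact command hasn't survived here, so this is only a sketch of the kind of filter chain described, not the original profile: yadif at field rate deinterlaces 25i to 50fps, then the framerate filter blends linearly up to 60fps. The encoder and muxer options are placeholders.)

```shell
ffmpeg -i pipe:0 \
  -vf "yadif=1,framerate=fps=60" \
  -c:v libx264 -preset veryfast -c:a copy \
  -f mpegts pipe:1
```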
This uses a disgusting amount of CPU time, but given there’s no other way of producing watchable output on many devices, it’s well worth it. On Android, Kodi, etc. you can have either linear blending or questionable deinterlacing, but not both – and certainly nothing as nice as yadif=1.
Update: I stand corrected on the Kodi front. At least on Windows, if you disable DXVA2, set output to Pixel Shaders and set deinterlacing to Deinterlace (not half), you can get high-quality deinterlacing and linear blending, but it comes at a substantial CPU/GPU cost.
Quiet even at high load, more so than most desktop PCs
BIOS is absolute garbage:
No PCI-express 1.0(a) support, which prevents the use of many old low-speed PCIe cards such as NICs and TV tuners
Dell couldn’t give a toss about this, because hurr durr it’s a server – who cares about standards if we can line our pockets from makers of PCIe NICs and RAID cards via a little something we call Planned Obsolescence(tm)
Power saving features such as PCI ASPM are missing
Can’t disable the onboard GPU in spite of a perfectly functional serial port and SoL implementation, unless a discrete GPU is present – but surprise surprise, 99% of GPUs don’t work due to PCIe and power limitations
Cannot boot from USB (or at all?) without at least one SATA device present
Add the following somewhere in your theme’s shared functions file, or similar:
function remove_all_scripts() {
    if (is_user_logged_in()) return; // logged-in users keep their scripts
    foreach (wp_scripts()->registered as &$script)
        $script->deps = [];
    wp_scripts()->queue = [];
}
add_action('wp_enqueue_scripts', 'remove_all_scripts', PHP_INT_MAX);
The ‘is logged in’ check keeps the wp-admin panel (among other things) working.
Disks are 2x Western Digital Red 4TB. All configurations are RAID 1. (Technically Storage Spaces Mirror isn’t true RAID 1, but it’s the closest possible for this comparison.) Disregard the varying partition sizes – all tests are at the start of the drives.
Dynamic Disks
Performance seems good, and the format is more portable than any hardware or fake RAID. Contrary to some outdated posts elsewhere, it does distribute reads across both drives. However, it does not distribute sequential reads, so performance is not as good as it could be for some workloads.
The major drawback is how easily a resync is triggered, and how badly it destroys performance for real-world read loads until it completes. For example, I loaded Divinity 2 while resyncing and it took minutes as opposed to seconds. Any time you BSOD or otherwise shut down improperly – or even hibernate (including fast startup) – a resync is triggered.
Partition management is limited – the only software I could find capable of editing an existing Dynamic Disk setup was EaseUS, and when tested it froze attempting to resize an empty partition and hosed the RAID. Additionally, if you have more than one partition on a Dynamic Disk RAID, resyncing occurs on each partition simultaneously, so on standard hard disks having more than one partition destroys resync performance!
Overall good performance and well-supported for future migrations, but a no-go for more than one partition, large disks, hibernation or PCs without flawless stability.
Storage Spaces
For the longest time I’ve been using Storage Spaces with zero data issues, but the poor performance of gaming workloads pushed me to look for alternatives. The preallocation done by Steam and Fortnite updates is extremely slow under Storage Spaces compared to bare disks(!) or any other type of RAID I’ve tried.
The benchmark results mask just how slow certain workloads can be – anything that involves reading/writing the same file (e.g. patching) or allocating a large file via a non-Windows-API approach is dog slow. (OTOH, fsutil file createnew is fast.)
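For reference, the fast path is plain fsutil preallocation (path and size here are just examples – the second argument is the file length in bytes):

```bat
fsutil file createnew D:\Games\preallocated.bin 10737418240
```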
That said, Storage Spaces still manages to outperform a single disk slightly for reads, and even for writes, with the latter likely due to the abstraction reordering the random writes produced by CrystalDiskMark into something more optimal.
I’ve seen little discussion of Fixed provisioning of Storage Spaces online – it’s hidden behind arcane PowerShell commands. As it supports normal defrag (which implies a more natural on-disk format), I was hoping for performance to exceed thin-provisioned Storage Spaces, but it is actually slower across the board. There doesn’t seem to be any reason to use Fixed provisioning over the default thin-provisioned mode.
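For reference, creating a fixed-provisioned mirror goes something like this – the friendly names are examples and this is only a sketch using the standard Storage Spaces cmdlets, not a recorded session:

```powershell
# Pool the eligible disks, then create a mirrored, Fixed-provisioned space.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Mirror" `
    -ResiliencySettingName Mirror -ProvisioningType Fixed -UseMaximumSize
```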
Speaking of on-disk formats, it scares me slightly that the contents of each disk aren’t a carbon copy of one another.
Compared to Dynamic Disks, partition management is good – you can resize and add/remove partitions at will with even more flexibility than on an ordinary disk. Tool support is good, as each partition is presented to the system as if on a Basic disk.
The resyncing issue of Dynamic Disks doesn’t exist at all in Storage Spaces – throughout countless BSODs, resets, power losses, etc. I’ve not once seen Storage Spaces do any kind of resyncing, nor have I experienced any issues with my data stored on it.
Overall good support and a solid choice if redundancy and data protection are your aims, but the performance is lacking.
Intel Rapid Storage Technology
I never thought I’d find myself considering fake RAID, but given I’m currently using a Supermicro server motherboard in my desktop with a special ‘enterprise’ incarnation of Intel RST (RSTe), I figured I’d give it a shot. Note: I have reason to believe RSTe performs better than the RST found on desktop-grade motherboards, especially for RAID 1, so take these numbers with a pinch of salt.
The benchmark results for read performance eclipse the other options – even sequential reads are distributed across both disks. Writes are slower, but that is with all the dangerous write-caching options disabled (I have no UPS). I can confirm these figures in real-world use – everything is subjectively faster than with the other configurations.
I’m mildly concerned about what happens when I upgrade my PC and/or my motherboard fails, as my future plans lie with AMD. My mind was put at ease slightly when I discovered there is an option (unique to RAID 1) in RST that can turn the first disk back into a standard non-RAID disk without rewriting the data, so the on-disk format must be close to that of an individual disk. Seeing the amount of recovery software available, as well as a pure-software implementation in Linux, also gives me hope I’m not digging my data into a hole. I actually feel happier with Intel RST than I would with a hardware RAID card in this regard – at least it will work on any Intel board rather than one particular RAID controller.
This is good old-fashioned fake RAID, so partition management and tool support are flawless – as far as software is concerned, it is a single disk.
Like Dynamic Disks, resyncing is an issue after unclean shutdowns/BSODs/etc., but in my experience it only occurs in rare cases – many unclean shutdowns do not trigger it. The implementation is also nicer: it is not triggered by hibernation/sleep, and it can be paused and resumed across hibernation or shutdown, whereas Dynamic Disks restart from scratch. It also does not attempt to resync multiple partitions at once, so it is a relatively fast linear operation. System performance impact during a resync is roughly the same as Dynamic Disks – i.e. significant.
Overall performance is excellent, software support is flawless, and future-proofing is slightly better than true hardware RAID but behind the other software options.
There’s no one-size-fits-all solution, but I’ve found myself doing a complete 180 from ‘lol fake raid who uses that’ to actually recommending Intel RST above other options for Windows.
Dynamic Disks work well until you inadvertently trigger a resync, then you spend the next day being reminded of how bad they are. The design goals behind Storage Spaces – archival and flexibility – clearly come at the cost of performance.
I’ve ignored proper RAID cards here, but ultimately, until Microsoft get their act together and create something on the same level as MD or BTRFS, hardware remains king. I wouldn’t be surprised if there was indeed a financial reason why software RAID is slower on Windows.
3 months later…
An update on this: I’ve decided to move my RAID storage to BTRFS RAID 1 and access it from Windows via SMB. Performance is remarkably good – despite benchmarking slower than any of the options here, real-world use is at least as good as Storage Spaces. Annoyingly, some software (Epic Games launcher, Battle.net, etc.) refuses to work on a network share, but Steam works well. The integrity checking of data on BTRFS trumps all these issues in my book, alongside regular backups in case BTRFS itself falls over. Plus, with zstd compression and subvolumes (shared free space), I was able to condense 6TB worth of disks into 4TB.
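For reference, the Linux end amounts to something like this – the device names and mount point are examples, not my actual layout:

```shell
# Mirror both data and metadata across two disks, mount with zstd
# compression, and scrub to verify checksums (the integrity checking
# mentioned above).
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount -o compress=zstd /dev/sdb /mnt/array
btrfs scrub start /mnt/array
```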
During the process of moving, I discovered I could mount one of the Intel RAID 1 disks on Linux directly as if it were a standalone disk (with no additional configuration) – so this is a huge positive for the safety of data on Intel RAID 1 in case of motherboard failure.
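This squares with Intel’s IMSM metadata living at the end of the disk, so a RAID 1 member carries the filesystem as-is. Two hedged examples of what that looks like in practice (device names are made up):

```shell
# A RAID 1 member can be mounted read-only directly:
mount -o ro /dev/sdb1 /mnt/rescue
# Or assemble the array properly - Linux mdadm understands IMSM metadata:
mdadm --examine /dev/sdb   # inspect the RST/IMSM container
mdadm --assemble --scan    # bring the array up as /dev/md*
```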
I’ve been experiencing chronic battery drain since updating to Android Pie, attributed to Android System, Android OS and Google Play Services.
I’ve finally found the culprit: the Pebble app. Replaced with Gadgetbridge and all is well.
And as a handy side effect, phone calls no longer show up as from Unknown!
P188900 – Coolant shut-off valve -N82- – Short circuit to ground/open circuit
Getting this recurring fault code reported by CarPort on a Mk7 Golf. The dealership is adamant there are no faults reported via their diagnostic system, so presumably this is a false positive. I’m not getting any symptoms or warning lights whatsoever.
If you’re in the same boat, don’t bother chasing this up and don’t panic from what you’ll see on Google wrt coolant leaks destroying wires.