I do not have any specific measurements for I/O scheduler and ext4
tuning. The few I found on the Internet usually suggest the deadline
scheduler over CFQ to reduce latency and increase throughput, and
`noatime` over `relatime` as an ext4 mount option to minimise writes on SSDs.
But I've also read that the CFQ scheduler was tweaked starting with
Linux kernel 4.2 to offer better performance on SSDs. This is probably
why CFQ is the default scheduler on Clear Linux. So it would be great
if you could share your own measurements, to document the choices made
for Clear Linux...
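For anyone who wants to experiment on their own hardware first: the
scheduler can be inspected and switched per device through sysfs. A
sketch, assuming a device named sda (adjust the device name for your
machine; the change needs root and does not persist across reboots):

```
# Show the schedulers the kernel offers for sda; the active one is
# printed in brackets, e.g. "noop deadline [cfq]".
cat /sys/block/sda/queue/scheduler

# Switch sda to deadline at runtime; takes effect immediately.
echo deadline | sudo tee /sys/block/sda/queue/scheduler
```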
Actually, my main concern now is how a user's custom settings can be
carried over into systemd-boot's entry configuration files each
time the kernel is updated. Maybe I'd better start another thread.
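For reference, a systemd-boot entry is just a small text file on the
ESP. A sketch of the kind of customisation I mean, where the file name,
kernel path and PARTUUID are all hypothetical placeholders, and
`rootflags=noatime` / `elevator=deadline` carry the ext4 and scheduler
settings discussed above:

```
title   Clear Linux (custom options)
linux   /EFI/org.clearlinux/kernel-native
options root=PARTUUID=... rootflags=noatime elevator=deadline
```

The question is how such an entry survives a swupd kernel update, which
rewrites these files.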
On Sun, 2016-05-15 at 17:26 -0700, Arjan van de Ven wrote:
On 5/15/2016 8:08 AM, fb.dev.clx wrote:
> Hi all,
> Actually, as I want to pass kernel's boot options for the root file
> system and the I/O scheduler, I'd probably better go with `systemd-
> boot` configuration.
> It seems that swupd-client calls some post-update helper scripts that
> regenerate the kernel boot configuration each time there is a new kernel... So,
> can I customize default kernel's boot options for systemd-boot and
> swupd-client's helper scripts?
btw the io scheduler is a runtime tunable.....
but we spent a bunch of time measuring things, and we ended up with CFQ
based on data on SSDs.... do you have any data to suggest that other
schedulers are better? (we're always open to suggestions ;-) )