On Sun, Sep 22, 2019 at 11:01 PM Auke Kok <auke-jan.h.kok(a)intel.com> wrote:
On 9/22/19 5:23 PM, Alexander Zaitsev wrote:
> Clear Linux is known as a Linux distribution that aims for high
> performance. One of the ways to achieve this is to use PGO (e.g. as you
> do it for Python:
Glad you like the blog, thanks
> The main problem is that only a few open source applications provide a
> test load that can be used for compiling programs with PGO. I think we
> can try to change that.
> The biggest issue here is that we don't know the usual load for
> different applications - are there any possible ways to fix that? E.g.
> try to collect some statistics from users, or something like that.
In Clear Linux we choose "generic" workloads for our PGO enabled
packages. For instance, in php, we use phpbench. For bzip2, we compress
a bunch of files that we think are a reasonable set.
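To make the "generic workload" idea concrete, here is a hypothetical sketch of such a training run: exercise the tool over a small mixed corpus meant to resemble typical input. The corpus contents are invented, and gzip stands in here for an actual PGO-instrumented compressor build.

```shell
# Build a small mixed corpus: compressible and incompressible inputs.
mkdir -p corpus
head -c 65536 /dev/urandom > corpus/random.bin      # incompressible data
yes "the quick brown fox" | head -n 4096 > corpus/repetitive.txt
echo "hostname" > corpus/small.txt                  # tiny file edge case

# The training run: exercise the common code paths of the tool.
# (With a real instrumented build, this step writes the profile data.)
for f in corpus/random.bin corpus/repetitive.txt corpus/small.txt; do
    gzip -k -f "$f"
done
ls corpus/*.gz
```

The point of mixing file types is to drive the program through its common branches, not to measure speed; the same corpus can later double as a regression benchmark.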
I agree with Auke on this point: in Clear Linux we choose "generic"
workloads for our PGO-enabled packages.
The intention is to improve the performance of the most common code
paths; however, for some specific applications it is necessary to train
PGO with the target benchmark.
It's entirely reasonable to attempt to push upstream projects to do
this and include a "reasonable" workload. However, not every workload is
typical for everyone's use case. Someone might want to profile for a
specific workload. So it's entirely reasonable that some projects say
"you have to choose your own".
Agreed - we use generic workloads, and we encourage people to do this
for their own use cases (which is why we document it in blogs and
tutorials).
In Clear Linux, we can avoid that discussion - we can, and probably
always should, pick a "reasonable" workload that likely does well in the
general case.
Ideally, upstream projects carry several workloads and "benchmark" them
to detect regressions at the same time.
You'd have to do this work for each project where it makes sense. Clear
Linux can hopefully be an example, but each project has to decide for
itself.
Dev mailing list -- dev(a)lists.clearlinux.org
Regarding your questions:
1) Do you like the idea of pushing project authors to prepare usual
workloads?
Agreed - if authors create an extra rule where the benchmark could be
an argument, that might make things easier.
2) In which ways it can be performed?
Do you mean, in which ways the load script could be performed? PGO
has 3 main phases:
1) Build an instrumented version of the program.
2) Run the benchmark to collect the execution profile.
3) Rebuild the source using the profile data.
If the rules to build an instrumented binary and to rebuild with the
profile data are provided by the build infrastructure (make / cmake /
...), that would make things easier.
I am glad you like the Clear Linux approach to PGO.