While experimenting with different ways of coding things, it would be nice if I could get some idea of how one version of the code compares with other versions in terms of 'work done'.
I know I can get an idea of timing differences by using a Stopwatch, but it's difficult for me to know when my machine is doing stuff in the background which might throw those timings off in some way.
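For context, my current approach is roughly this (a minimal sketch; `work` is just a placeholder for whatever code I'm comparing):

```fsharp
open System.Diagnostics

// Hypothetical stand-in for the code under comparison.
let work () =
    List.sum [ 1 .. 1000 ]

let sw = Stopwatch.StartNew ()
let result = work ()
sw.Stop ()
printfn "Result: %d, elapsed: %d ms" result sw.ElapsedMilliseconds
```

The problem is that two runs of the same `work` can report noticeably different elapsed times depending on what else the machine is doing.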
It would be nice if I could run some code and get a pretty good (doesn’t need to be exact) idea of the number of processor instructions which were executed by the thread(s)/core(s) running the FSI code.
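For illustration, the nearest documented counter I know of is the Win32 `QueryThreadCycleTime` function, which reports CPU cycles (not instructions) consumed by a thread, so I'm not sure it measures quite what I want. A rough P/Invoke sketch callable from FSI might look like this (Windows-only; `measureCycles` is just a name I made up):

```fsharp
open System.Runtime.InteropServices

// Win32 declarations. QueryThreadCycleTime reports the CPU cycles a
// thread has consumed - a cycle count, not an instruction count.
[<DllImport("kernel32.dll", SetLastError = true)>]
extern bool QueryThreadCycleTime(nativeint threadHandle, uint64& cycleTime)

[<DllImport("kernel32.dll")>]
extern nativeint GetCurrentThread()

/// Runs f on the current thread and returns its result together with
/// the approximate CPU cycles the thread consumed while running it.
let measureCycles (f: unit -> 'a) =
    let mutable before = 0UL
    let mutable after = 0UL
    let handle = GetCurrentThread ()
    QueryThreadCycleTime (handle, &before) |> ignore
    let result = f ()
    QueryThreadCycleTime (handle, &after) |> ignore
    result, after - before
```

But even if that works, cycles still vary with clock speed, cache state and so on, which is why I'm asking whether a count of instructions (or something similarly stable) is obtainable.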
That way I could reasonably judge which version of the code was more 'efficient' than the others.
I'm aware of BenchmarkDotNet, but (as far as I know) it requires me to create a Console App and run the code on a 'quiesced' machine to get good timing data. I only have the one machine, I have little control over what it's doing in the background, and even small benchmarks can take a long time to run, what with the different phases etc.
I realise that 'processor instructions performed' isn't a good real-world measure, but it, or something like it, would give me some idea of the relative work going into the processing, e.g. 50,000 instructions versus 200,000 instructions.
Is this sort of thing possible?
Note: I’m using Visual Studio 2022 on Windows 10.