
Updates to SPEC CPU benchmark scripts

Mahesh Madhav requested to merge spec_validate into master

These are the exact cmdlines in use right now. run_spec.sh reflects the workloads and switches, and should be run from inside the benchmark directory.

Please take a look and modify based on your own understanding. I set absolute expected vertex/node counts based on my own runs. All of these verify for me except the ones below; those are 2D meshes without a volume, so the quality calculations come out off:

TieAnchor520.val:Error: Min mesh quality is 0.0238612, outside of range
mediterranean.val:Error: Min mesh quality is 0.0942114, outside of range
projection.val:Error: Min mesh quality is 0.0181333, outside of range
sphere-discrete.val:Error: Min mesh quality is 0.00735418, outside of range
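For context, the failing .val lines above are simple range tests. A minimal sketch of that style of check in POSIX sh (the EXPECTED, ACTUAL, and TOL_PCT numbers are made up for illustration, not the benchmark's real values):

```shell
#!/bin/sh
# Hedged sketch: compare a measured node count against an expected value
# with a relative tolerance. All numbers below are assumptions.
EXPECTED=51234   # hypothetical expected node count
ACTUAL=51200     # hypothetical measured node count
TOL_PCT=2        # allow 2% relative deviation

DIFF=$(( ACTUAL - EXPECTED ))
[ "$DIFF" -lt 0 ] && DIFF=$(( -DIFF ))          # absolute difference
LIMIT=$(( EXPECTED * TOL_PCT / 100 ))           # tolerance in counts

if [ "$DIFF" -le "$LIMIT" ]; then
  echo "PASS: node count $ACTUAL within ${TOL_PCT}% of $EXPECTED"
else
  echo "Error: node count $ACTUAL outside of range"
fi
```

A relative tolerance like this tends to be more robust across platforms than exact absolute counts, since mesh generators can produce slightly different element counts on different hardware or thread counts.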

Additionally, please look at the cmdlines that use the -check switch. How is that check different from what you added in the .geo scripts? Is one style better than the other? I cannot add -check to the other cmdlines, because they actually throw warnings or failures from that [more stringent?] checking.

Also, it appears that all this checking is increasing the runtime of our benchmarks. How representative is mesh checking in the real world? How often do people verify their meshes with this kind of detailed analysis? Some committee members are asking about this. I also noticed that the multithreaded benchmarks get thread scaling on the meshing phase but run the checking on a single thread.
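On the runtime question: a single-threaded check phase puts a hard floor under the total runtime no matter how many threads the meshing uses (Amdahl's law). A rough illustration with assumed timings (not measurements from these benchmarks):

```shell
#!/bin/sh
# Hedged illustration: assume meshing takes 90 s and scales with threads,
# while checking is a fixed 10 s serial tail. Integer arithmetic only.
MESH=90
CHECK=10
for T in 1 2 4 8; do
  TOTAL=$(( MESH / T + CHECK ))
  echo "threads=$T total=${TOTAL}s"
done
```

Even at 8 threads the serial check is roughly half the total here, so the more checking we add, the worse the reported scaling looks.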
