This page compares the performance of four SML compilers (the ML Kit, MLton, Moscow ML, and SML/NJ) on a range of benchmarks. All of the benchmarks are available here. Some of the benchmarks were obtained from here. Some of the benchmarks use data files in the DATA subdirectory.

Setup

All benchmarks were compiled and run on a 733MHz P6 with 512M of physical memory. The benchmarks were compiled with the default settings for all of the compilers, except for Moscow ML, which was passed the -orthodox -standalone -toplevel switches. The SML/NJ executables were produced by wrapping the entire program in a local declaration whose body performs an exportFn.
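
For concreteness, here is a minimal sketch of that wrapping. The function doit and the image name "benchmark" are placeholders for this example, not part of any benchmark's actual source:

   local
      (* placeholder for the real benchmark code *)
      fun doit () = print "running benchmark\n"
      (* exportFn expects a function from the program name and
       * command-line arguments to an exit status
       *)
      fun main (_: string, _: string list) =
         (doit (); OS.Process.success)
   in
      (* write a heap image that applies main when loaded *)
      val _ = SMLofNJ.exportFn ("benchmark", main)
   end

The resulting heap image is then loaded with the SML/NJ runtime, e.g. sml @SMLload=benchmark.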

Run time ratio

For each benchmark, the following table gives the ratio of its run time under each compiler to its run time when compiled by MLton; a ratio greater than 1.0 means the executable was slower than MLton's, while a ratio less than 1.0 means it was faster. A * in an entry means that the compiler failed to compile the benchmark or that the benchmark failed to run.

benchmark          ML Kit  Moscow ML  SML/NJ
barnes-hut            3.5       12.2     0.8
checksum                *          *     3.3
count-graphs          6.7       21.9     1.5
fft                   8.1       26.3     1.0
fib                   0.9        4.8     1.1
hamlet                  *        9.2     1.4
knuth-bendix          4.8       12.6     2.1
lexgen                2.4        5.6     1.6
life                  6.9       20.0     0.9
logic                   *        3.5     0.6
mandelbrot            4.2       32.7     1.4
matrix-multiply      10.1       35.3     3.0
md5                     *          *     3.0
merge                   *          *    10.7
mlyacc                  *        7.7     1.7
mpuz                  8.8       47.9     3.0
nucleic                 *       15.8     0.7
peek                  5.4       28.4     1.9
psdes-random          8.7          *     2.3
ratio-regions        12.4       47.0     5.7
ray                     *       11.8     0.9
raytrace                *          *     2.3
simple                2.3       12.5     1.3
smith-normal-form       *          *    96.4
tak                   1.7        7.8     1.4
tensor                  *          *     5.9
tsp                   2.4       14.4     1.6
vector-concat        11.5       20.5     7.1
vector-rev           12.4       39.1    23.1
vliw                  2.6        7.8     1.3
wc-input1            13.4          *     7.3
wc-scanStream         8.0          *     1.6
zebra                11.8       23.3     6.9
zern                    *          *     1.5
zern * * 1.5

Compile time

The following table gives the compile time of each benchmark in seconds. A * in an entry means that the compiler failed to compile the benchmark.

benchmark          ML Kit  MLton  Moscow ML  SML/NJ
barnes-hut            8.1    2.6        0.8     1.7
checksum                *    0.7          *     0.2
count-graphs          2.6    1.8        0.2     1.1
fft                   2.1    1.5        0.2     1.0
fib                   1.0    0.7        0.1     0.2
hamlet                  *   52.3       43.2    91.6
knuth-bendix          5.3    2.5        0.4     2.2
lexgen               10.2    5.8        0.8     4.8
life                  2.8    1.5        0.2     0.7
logic                 6.8    7.5        0.4     2.0
mandelbrot            1.1    0.8        0.1     0.2
matrix-multiply       1.2    0.8        0.1     0.3
md5                     *    2.4          *     2.1
merge                 1.0    0.8        0.1     0.2
mlyacc               60.4   19.4        7.4    24.1
mpuz                  1.4    1.0        0.1     0.4
nucleic              28.2    4.5        2.1     3.0
peek                  1.1    1.1        0.1     0.2
psdes-random          1.1    0.8          *     0.3
ratio-regions         4.4    3.1        0.4     2.1
ray                   3.7    3.6        0.2     1.1
raytrace                *   10.1          *     6.8
simple               14.6    7.4        0.9     4.5
smith-normal-form       *    8.1          *     3.5
tak                   1.0    0.7        0.1     0.2
tensor                  *    3.1          *     3.5
tsp                   2.7    1.8        0.3     0.8
vector-concat         1.0    0.8        0.1     0.2
vector-rev            1.0    0.8        0.1     0.2
vliw                 36.2   12.1        3.0    18.8
wc-input1             1.1    1.7        0.1     0.3
wc-scanStream         1.1    1.8        0.1     0.3
zebra                 2.8    5.0        0.2     0.8
zern                    *    1.1          *     0.8

Code size

The following table gives the code size of each benchmark in bytes. The size for MLton and the ML Kit is the sum of the text and data segments of the standalone executable, as reported by the size program. The size for Moscow ML is the size in bytes of the executable a.out. The size for SML/NJ is the size of the heap file created by exportFn and does not include the size of the SML/NJ runtime system (approximately 95K). A * in an entry means that the compiler failed to compile the benchmark.

benchmark           ML Kit    MLton  Moscow ML     SML/NJ
barnes-hut         179,980   45,835     94,990     331,768
checksum                 *   22,994          *     332,504
count-graphs       109,948   42,418     84,575     355,376
fft                107,284   33,114     84,095     332,808
fib                 68,588   22,842     79,878     310,968
hamlet                   *  977,217    277,168   1,263,816
knuth-bendix       115,252   62,139     88,439     321,504
lexgen             227,444  132,090    104,883     390,136
life               100,220   39,642     83,390     305,120
logic              136,068  150,658     87,252     331,744
mandelbrot         101,612   22,906     81,341     311,992
matrix-multiply    118,348   23,466     81,879     338,632
md5                      *   36,187          *     331,792
merge               68,796   23,962     80,091     307,904
mlyacc             525,404  425,226    148,286     700,456
mpuz                89,500   27,970     82,381     320,184
nucleic            234,220   60,066    207,154     354,288
peek                77,420   30,539     81,618     311,016
psdes-random        84,540   24,178          *     313,016
ratio-regions      111,308   59,378     87,485     335,856
ray                123,356   73,377     89,860     384,072
raytrace                 *  180,902          *     510,056
simple             194,364  170,906     94,397     641,056
smith-normal-form        *  143,386          *     483,400
tak                 68,356   22,866     79,928     306,872
tensor                   *   53,802          *     342,048
tsp                115,732   39,067     86,140     322,552
vector-concat       77,660   23,426     80,191     317,128
vector-rev          77,860   23,482     80,073     317,128
vliw               418,076  264,234    135,386     618,576
wc-input1          144,652   43,075     86,900     311,992
wc-scanStream      145,100   45,675     87,076     313,016
zebra               85,404  111,131     83,419     310,256
zern                     *   27,041          *     337,944


Last modified: Tue Jul 10 15:05:24 PDT 2001