Various benchmarks comparing the web, file, and database server
performance of NT and Linux have been performed recently. This page
summarizes the results of some of those comparisons, and shows
interesting graphs (if available) from each benchmark effort.
See the updated article at
itweek.co.uk, which says
"Samba 3 extends lead over Win 2003
By Roger Howorth [14-10-2003]
The latest Samba release shows Windows a clean pair of heels in file and print performance
Tests by IT Week Labs indicate that the latest version of the open-source
Samba file and print server software has widened the performance gap
separating it from the commercial Windows alternative.
The latest benchmark results show an improvement over [Samba 2], which
performed twice as fast as Windows 2000 Server when it was tested by
IT Week Labs last year. Overall, it now performs 2.5 times faster than
Windows Server 2003.
In terms of scalability, the gains of upgrading to Samba 3 are even more
striking. Last year we found that Samba could handle four times as many
clients as Windows 2000 before performance began to drop off. This year
we would need to upgrade our test network in order to identify the point
where Samba performance begins to fall in earnest.
The IT Week Labs tests used Ziff-Davis NetBench file server benchmark
with 48 client systems. We selected a low-specification but otherwise
modern server for our tests. We used an HP ProLiant BL10 eClass Server
fitted with a 900MHz Pentium III chip, a single 40GB ATA hard disk and
512MB of RAM. We did not tune any of the software to improve performance.
Each NetBench client makes a constant stream of file requests to the
server under test, whereas in real-world environments many users would
remain idle for long periods. Consequently our test environment simulates
the workload of some 500 client PCs in a typical production environment."
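A quick back-of-envelope check of that last claim (my arithmetic, not IT Week's): if 48 always-busy NetBench clients stand in for roughly 500 typical PCs, each real user is implicitly active only about a tenth of the time.

```python
# Back-of-envelope check (mine, not IT Week's): 48 saturating NetBench
# clients representing ~500 typical office PCs implies each real user
# generates load only a small fraction of the time.
clients, simulated_pcs = 48, 500
print(f"implied per-user duty cycle: {clients / simulated_pcs:.0%}")  # ~10%
```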
Here's the graph from the print version of the article (thanks to the IT Week staff for posting a link at lwn.net).
This is so different from the May 2003 Veritest results, one hardly knows where to start. Perhaps it's enough to note that the IT Week configuration corresponds to what really small businesses might do, whereas the Veritest test corresponds to what a large company might try for a central server if it were too lazy to install the latest Samba (rather unlikely...).
In May 2003, Microsoft hired Veritest to run the NetBench file serving benchmark to compare the CIFS file serving performance of Windows 2003 Enterprise Edition Release Candidate 2 against Red Hat Advanced Server 2.1. Veritest's Microsoft
reports page links to the
benchmark results in PDF format.
The server machine was an HP DL760 or DL380 equipped with 1, 2,
4, or 8 Pentium III Xeon CPUs, and a matching number of Intel
PRO/1000 MF gigabit ethernet cards. (HP has a nice Linux support
page for both the
DL380 and
DL760.) Each gigabit card was connected to a switch, which was
connected via 100baseT to 15 or 30 client systems. Throughput was
measured at 1, 8, 16, ... active clients, up to the number
physically present. (See the graphs of throughput vs. # of clients
on pages 10-11 of the report, or see Joe Barr's extracted graph
of the same for 4 processors.) Here's a table describing the
setup and results (figures accurate only to about 20Mb/s, as they had to be read off the imprecise graphs in the Veritest report):
| Server | CPUs | Interfaces | Clients/interface | Linux peak at (clients) | Linux Mb/s at peak | Linux Mb/s at full load | Windows peak at (clients) | Windows Mb/s at peak | Windows Mb/s at full load |
|--------|------|------------|-------------------|-------------------------|--------------------|-------------------------|---------------------------|----------------------|---------------------------|
| DL380  | 2    | 4          | 30                |                         |                    | 350                     |                           |                      | 700                       |
| DL760  | 1    | 2          | 30                | 16                      | 244                | 210                     | 24                        | 453                  | 350                       |
| DL760  | 2    | 4          | 30                | 16                      | 385                | 320                     | 32                        | 632                  | 560                       |
| DL760  | 4    | 4          | 30                | 24                      | 462                | 410                     | 48                        | 901                  | 710                       |
| DL760  | 8    | 8          | 15                | 32                      | 657                | 590                     | 80                        | 1088                 | 950                       |
Veritest used the peak numbers to conclude that, at 8 processors,
Windows was 1088Mb/s / 657Mb/s = 1.66 times faster than Linux. It
would be equally fair to take the fully loaded results, and
conclude that Windows was 950Mb/s / 590Mb/s = 1.61 times faster
than Linux.
Veritest (aka Lionbridge) has a rather
cozy relationship with Microsoft, so Microsoft's claim that the
tests were done by a truly independent organization is somewhat
misleading. Nevertheless, the benchmark does not appear manifestly
unfair.
Issues that may have affected performance:
- CPU affinity problem: Veritest commented that, in the 4
processor case, Red Hat Advanced Server 2.1 was unable to assign
each of the four network cards its own IRQ line, so two of the
network cards had to share an IRQ line. This did not affect the
results at 1, 2, or 8 processors.
- Filesystem settings: It's not clear whether they mounted the Linux filesystems with the noatime option; access-time updates are commonly disabled on Linux fileservers (see Securing and Optimizing Linux: RedHat Edition - A Hands on Guide). A quick way to check is sketched just after this list. I've sent email to Veritest requesting clarification. This and several other settings were mentioned in ranger's comment to the lwn article.
- ext3: As Randy
Hron's recent
local file system benchmarks on the 2.5.69 Linux kernel show,
ext3 is about 12% slower than xfs. "ext3 lock_kernel removal" is on the 2.6 must-fix list; perhaps that will help. See also
Martin Bligh's note about ext3's poor scalability to large numbers
of CPUs, and a similar set of
benchmarks run in 2001.
- RAID0 vs. RAID5: see Joe Barr's article in LinuxWorld, which suggests that if RAID5 had been chosen, results would have been more favorable to Linux.
I'm looking forward to counter-benchmarks from the Linux community (by OSDL, perhaps?).
Until recently, these tests mostly used ZD Labs' WebBench, which measures how fast a server can send out static HTML pages from a 60 megabyte document tree. All of the tests have been run on a local LAN, with no effort made to model the large number of simultaneous slow connections found on most web servers accessed via the Internet. Also, all but one of these tests used exclusively static web pages, whereas dynamically generated web pages are standard fare on big web sites.
Happily, the SPECweb99 benchmark is becoming more popular; it limits each client to 400kbits/sec, uses a mix of static and dynamic web fetches from a document tree too large to cache in memory, and scores web servers by how many clients they can handle without any dropping below 80% of 400kbits/sec. This is a much more realistic test -- and harder to fudge, since you are only allowed to report results if you follow a fairly stringent set of test guidelines.
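To make the scoring rule concrete, here is a small sketch (my own illustration, not SPEC's actual harness) of the conformance test just described: a connection counts toward the score only if it sustains at least 80% of the 400kbits/sec cap, i.e. 320kbits/sec.

```python
# Illustration of the SPECweb99 scoring rule described above (my own
# sketch, not SPEC's harness): a simultaneous connection "conforms"
# only if its average throughput is at least 80% of the 400 kbit/s cap.
CAP_KBITS = 400.0
THRESHOLD = 0.80 * CAP_KBITS  # 320 kbit/s

def conforming_connections(throughputs_kbits):
    """The reported score is the number of conforming connections."""
    return sum(1 for t in throughputs_kbits if t >= THRESHOLD)

# Example: three of these four connections meet the 320 kbit/s floor.
print(conforming_connections([400.0, 335.2, 319.9, 321.4]))  # -> 3
```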
Another benchmark of interest is TPC-W. See tpc.org for more information. TPC-W
models a range of typical e-commerce-oriented web sites, and will
require a database (as one would expect from a benchmark by the TPC
folks). It's more expensive to run, as you must hire a TPC-approved
auditor to audit your benchmark results. No results yet for
Linux.
Benchmark Results
- ZD (Sm@rt Reseller), January
1999
- Mindcraft, April 1999
- PC Week, 10 May 1999
- ZD Labs, 14-19 June 1999
- c't Magazin, June 1999
- PC Magazine, September
1999
- IBM / SPECweb99, November
1999
- IBM / SPECweb99, December
1999
- IBM / SPECweb99, February
2000
- PC Week, 17 December 1999
- Dell / SPECweb99, June
2000
- Dell / SPECweb99, July
2000
- Dell / SPECweb99, November
2000
- Dell / SPECweb99, April
2001
- Ziff-Davis/eWeek, June
2001
- IBM, June 2001
| Date    | SPECweb99 | CPU                  | L2    | RAM  | Doctree | Software          | Org  |
|---------|-----------|----------------------|-------|------|---------|-------------------|------|
| 04/2001 | 8001      | 8 x 700MHz PIII Xeon | 2MB   | 32GB | 26GB    | W2k + IIS5 + SWC3 | Dell |
| 11/2000 | 7500      | 8 x 700MHz PIII Xeon | 2MB   | 32GB | 22GB    | 2.4.? + Tux2.0    | Dell |
| 06/2001 | 3999      | 2 x 900MHz PIII Xeon | 2MB   | 16GB | 13GB    | 2.4.2 + Tux2.0    | IBM  |
| 06/2001 | 3227      | 2 x 1.133GHz PIII    | 512KB | 4GB  | 10GB    | 2.4.? + Tux2.0    | IBM  |
| 06/2001 | 2799      | 1 x 900MHz PIII Xeon | 2MB   | 8GB  | 8.8GB   | 2.4.2 + Tux2.0    | IBM  |
| 03/2001 | 2499      | 2 x 1GHz PIII        | 256KB | 4GB  | 8.1GB   | W2k + IIS5 + SWC3 | HP   |
| 06/2001 | 1820      | 1 x 1.133GHz PIII    | 512KB | 4GB  | 5.1GB   | 2.4.? + Tux2.0    | IBM  |
Sm@rt Reseller's January 1999 article, "Linux Is The Web Server's Choice," said "Linux with Apache beats NT 4.0 with IIS, hands down." The companion
article said unequivocally "The bottom line, according to our
hands-on analysis, is that commercial Linux releases can do much
more with far less than Windows NT Server can." ... "According to
ZDLabs' results (see test charts), each of the commercial Linux
releases ate NT's lunch."
Hardware: Single 100baseT + Single 266MHz CPU + 64MB RAM + 4GB IDE disk
Software: Linux 2.0.35 + Apache 1.3.1 vs. Windows NT Server 4.0 SP4 + IIS 4.0
Note that the number of requests per second in this test is
quite low compared to other benchmarks below. This is partly
because the 60 megabyte document tree used in this test didn't fit
into main memory, so the server was swapping to disk -- a situation
in which Linux seems to outperform NT, especially when testing
out-of-the-box untuned configurations. Compare with the 128 megabyte RAM test PC Magazine did in
September 1999, which also showed Linux beating NT under low
RAM, untuned conditions.
In March 1999, Microsoft commissioned Mindcraft to carry out a comparison
between NT and Linux. The comparison used ZD Labs' WebBench. The
server was configured with four CPUs and four fast Ethernet cards,
and for NT, each CPU was bound directly to one of the Ethernet
cards. In this configuration, NT's WebBench score was 2 to 3 times
higher than Linux's WebBench score. Also, Linux experienced a
disastrous falloff in performance above a certain point:
Hardware: Quad 100baseT + Quad 400MHz Pentium II Xeon + 1MB L2
Cache + 1GB RAM
Software: Linux 2.2.2 + Apache 1.3.4 vs. NT 4.0 EE + Service pack
3
Note the much higher scores in this test than in the January tests by Sm@rt Reseller; this is due to
the much faster CPUs, better tuning, and the 1 gigabyte of RAM (16
times as much as the earlier test), which allowed the 60MB document
tree to fit many times over in RAM, probably reducing disk I/O to
zero.
On 10 May 1999, PC Week (Ziff-Davis) compared
the http and smb serving performance of Linux 2.2.7, Solaris
2.7, NT 4.0, and Netware 5.0 on a quad 500MHz Pentium III server.
The comparison also used ZD Labs' WebBench. ZD was careful during
the web testing to avoid any disk I/O, and said that 2.2.7 gave
much better performance than 2.2.5, that they did edit the Apache
conf files, and that they used the top_fuel
patch. The http results show all four operating systems performing about equally at first, with throughput increasing linearly with the number of clients; Linux/Apache flattened out at about 28 clients, NT/IIS flattened out at about 40 clients, and Solaris/SWS was still increasing nearly linearly at 60 clients.
Hardware: Quad full duplex 100baseT + Quad 500MHz Pentium III + 512KB? L2 Cache + 1GB RAM
Software: Linux 2.2.7 + Apache (unknown version) + top_fuel
(Linux
tuning info) vs. NT 4.0 Workstation (NT
tuning info)
ZD Labs hosted a repeat of
Mindcraft's benchmark, and Red Hat sent Zach Brown and Doug Ledford
to help out. Once again, the benchmark used was ZD Labs' WebBench.
The results differ from the April test in two key ways: First,
Linux now equals NT's performance in the region in which the load
is too light to require multiple processors on the server. (In the
April tests, Linux had lagged behind even under light load.)
Second, above this region, Linux's performance remains constant
rather than falling to zero as it did in April. My guess is that
this is due mostly to fixes in the Linux kernel.
NT is still about 1.5 times as fast as Linux on single-processor
systems under heavy load, and 2.2 times as fast on SMP systems
under heavy load. (This gap between NT and Linux is much narrower
than in April's tests, which showed NT being about 30 times as fast
under really heavy load.)
Hardware: Quad 100baseT + Quad 400 MHz Pentium II Xeon + 2MB L2
Cache + 1GB RAM
Software: Linux 2.2.6 + Apache 1.3.6 + top_fuel
+ mod_mmap_static
(tuning
info) vs. NT 4.0 EE + Service Pack 4 (Tuning
info)
Here's a graph from Mindcraft's writeup showing all their results together.
Phase 1 corresponds to their second MS-hosted test using 2.2.6 (but run at ZD); Phase 2 is the same thing, but with better tuning; and Phase 3 is with the latest OS (2.2.10) and patches. Note the Phase 1 results don't show the same performance dropoff as the MS-hosted tests did. This probably means the Microsoft testbed somehow caused very poor Linux performance, possibly by tickling the TCP bug that was fixed in the 2.2.7 kernel.
In Phase 3, Linux showed only 14% better performance than in the earlier phases, suggesting that tuning was not the major problem with the original tests.
It's clear that, in multi-CPU, multi-ethernet performance,
Solaris really shines, NT does pretty well, and Linux 2.2.6 does
poorly. In fact, during the tests, they tried an alternate
high-performance Web server program for Linux, Zeus, and found that
it had the same problems as Apache. This means the performance
problems were probably mostly in the Linux kernel. Zach Brown
profiled the kernel during the test, and saw that four Fast Ethernet cards on a quad SMP system expose a bottleneck in Linux's interrupt processing; the kernel spent a lot of time in
synchronize_bh(). Linux would probably perform much better on this
test if a single Gigabit Ethernet card were substituted for the
four Fast Ethernet cards.
c't Magazin ran very interesting benchmarks of
Linux/Apache and NT/IIS on a quad Pentium 2 Xeon system. These
tests used custom benchmark software written by c't (available for
download). Like WebBench, this test used a small document tree
(10,000 4KB files); unlike WebBench, these tests also used a second
document tree (1,000,000 4KB files) that was too large to fit in
main memory, which tests the disk subsystem and caching behavior of
the operating system.
See also IT
Director's summary of the c't tests.
Here's their graph of performance on a single-CPU system with small sets (10^4) and large sets (10^6) of files.
(They didn't seem to repeat the same graph on a multi-CPU or
multi-ethernet system, darn it...)
Here's their graph of performance on 1- and 4-CPU systems with a single Ethernet card on extremely small sets of files (a single 4 kilobyte file!).
In all their single fast ethernet card tests on non-trivial
document trees, Linux equalled or beat NT. When a second fast
ethernet card was hooked up, though, NT beat Linux.
PC Magazine
compared the http performance of up-to-date but untuned
single-CPU, 128MB RAM servers (again with WebBench), and found that
NT did a lot more disk accesses than Linux, which let Linux score
about 50% better than NT. Here's their graph.
(They noted that "A savvy Windows NT administrator could make some
simple tweaks to bring that OS's performance in line with a
comparable Linux server," but since they wanted to show how the
servers performed without special tweaking, they didn't report a
graph of tuned results.)
Compare with the 64 megabyte RAM test
Sm@rt Reseller did in January 1999, which also showed Linux
beating NT under low RAM, untuned conditions.
IBM achieved a score of 404 SPECweb99 on a Netfinity 5000 with 2 x
600 MHz Pentium III 512 KB half-speed L2 Cache, 100 MHz bus, 2GB
RAM, 40GB disk space, Red Hat Linux 6.1, the Zeus web
server, and (although it's hard to imagine) three Gigabit Ethernet
cards. The document tree was 1.4 gigabytes -- potentially small
enough to fit into RAM.
Details at SPEC.org.
IBM achieved a score of 545 SPECweb99 on a Netfinity 5600 with a
single 533 MHz Pentium IIIEB 256 KB fullspeed L2 Cache, 133 MHz
bus, 1.5GB RAM, 36GB disk space, Red Hat Linux 6.1,
the Zeus web server, and a single Gigabit Ethernet card. The
document tree was 1.8 gigabytes -- too large to fit into RAM.
Details at SPEC.org.
IBM achieved a score of 710 SPECweb99 on a Netfinity 5600 with a
single 667 MHz Pentium IIIEB 256 KB fullspeed L2 Cache, 133 MHz
bus, 2.0GB RAM, 45GB disk space, Windows 2000 Advanced
Server, the IIS5.0 web server, and a single Gigabit Ethernet
card. The document tree was 2.4 gigabytes -- too large to fit into
RAM.
Details at SPEC.org.
This CPU's clock rate was 25% higher than the one used in the December test. If performance scales linearly with processor speed, then Red Hat Linux 6.1 would have scored about 680 (545 x 667/533 ≈ 682) on this hardware, or about 4% slower than Win2K+IIS.
In PC
Week Labs' 17 December 1999 tests with WebBench, Windows 2000
scored about 25% higher than NT 4.0:
Hardware: ?? 100baseT + Compaq
6400R 2 or 4 CPU 500 MHz Pentium III Xeon system + ?? L2 Cache
+ 2GB RAM
Dell achieved a score of 4200 SPECweb99 on a
PowerEdge 6400/700 with four 700 MHz Pentium III CPUs (2MB L2 cache), 133 MHz bus, 8.0GB RAM, 45GB disk space, Red Hat Linux
6.2 "Threaded Web Server Add-On" with the TUX web server, and
four Gigabit Ethernet cards. The document tree was 13.6 gigabytes
-- too large to fit into RAM.
Details at SPEC.org.
A few notes: the 4 processor test was done with a huge L2 cache (2
MB), whereas the 1 and 2 processor tests were done with the
full-speed but small 256KB L2 cache. Even so, the scaling (1/2/4
CPU system scores were 1270/2200/4200 = 1.0/1.7/3.3) isn't bad.
TUX ("Threaded LinUX web server"), the server software used in
this test, is a 2.4 kernel based kernel-mode http server meant to
be an Apache addon. It was written mostly by Ingo Molnar; see his
note on Slashdot and his replies to questions on
Linux Today.
Ingo's
September 1, 2000 announcement says an alpha version of TUX can
be downloaded from ftp://ftp.redhat.com/pub/redhat/tux,
and explains how to join a mailing list for more info.
TUX has achieved quite an impressive SPECweb99 score, the
highest ever recorded, I think. See the
benchmark notes.
The big news is that although Win2000/IIS 5.0 on the same
hardware turned in a good score of 732 with 1 processor, it did not
scale well, scoring only 1600 with 4 processors (2.2 times faster
than 1 CPU) -- even though the document tree for the Win2000 run
was only 5.2 gigabytes, small enough to fit in RAM. Overview and
details at SPEC.org.
Dell achieved a score of 6387 SPECweb99 on a
PowerEdge 8450/700 with eight 700 MHz Pentium III CPUs (2MB L2 cache), 32.0GB RAM, 45GB disk space, Red Hat Linux 6.2 "Threaded
Web Server Add-On" with the TUX web server, and eight Gigabit
Ethernet cards. The document tree was 21 gigabytes -- small enough
to fit into RAM.
Details at SPEC.org.
Dell achieved a score of
7500 SPECweb99 with TUX 2.0 on Red Hat 6.2 Enterprise running
on a
PowerEdge 8450/700 with 32 GB RAM, powered by eight 700MHz
Pentium III Xeons (2MB L2 cache each), eight Alteon ACEnic 1000-SX gigabit Ethernet cards, and five 9GB 10K RPM drives. The document tree
was 22 GB -- small enough to fit into RAM.
Dell achieved a score of
8001 SPECweb99 with Microsoft's IIS 5.0 and SWC 3.0 on Windows
2000 Datacenter running on a
PowerEdge 8450/700 with 32 GB RAM, powered by eight 700MHz
Pentium III Xeons (2MB L2 cache each), eight 3Com 3C985B-SX gigabit Ethernet cards, and seven 9GB 10K RPM drives plus one 18GB 15K RPM drive. The document tree was 26 GB -- barely small enough to
fit into RAM.
eWeek published two articles in June 2001. They seem like good technology stories, but as benchmark result articles, they were somewhat short on details. The print version seems to have more detail; a segment titled "... details of benchmark tests: [putting] super-speedy Web server through all its paces was greatest challenge" (I'm working from a faxed page here, sorry) explains that:
- the server was a Dell PowerEdge 6400 with two 700 MHz Pentium III Xeon CPUs, two gigabit NICs, and 2 GB of RAM
- The OS was either Red Hat 7.1 upgraded to the 2.4.5 kernel, or
Windows 2000 Server with Service Pack 1
- originally the test was run with four CPUs rather than two, but they couldn't max out Tux like that, so they had to pull two of the CPUs! Looks like Tux scales nearly linearly to four CPUs.
- the test used a network of 80 workstations running WebBench as
the client load generator
- they worked with engineers from Dell, Red Hat, and Microsoft to
tune the servers and write the server modules to handle the dynamic
part of the test; they didn't use CGI as it was too slow to be an
appropriate test for fast servers. Supposedly this code is
available at www.eweek.com/links but it wasn't
there as of 23 June 2001.
The story has a graph of requests per second vs. client load for
four server combinations. It shows linear scaling up to 20 clients
for Apache alone, to 21 clients for IIS alone, to 45 for Tux with
Apache, and to 50 for Tux alone. (And with four CPUs, Tux scaled
nearly linearly past 80 clients, as noted above.) Can anyone
send me a URL for this graph?
I would have expected them to also test Microsoft's
SWC (Scalable Web Cache), which is a bit like Microsoft's
answer to Tux; IIS 5.0 can't get good SPECweb99 scores without
it.
(Note: eWeek is part of Ziff-Davis, but not really part of
zdnet, which is now owned by cnet... see
Tim Dyck's comment at LinuxToday. Hard to keep track of the
players these days.)
IBM achieved a score of 3999 SPECweb99 on an IBM eServer xSeries 370 with dual 900 MHz Pentium III Xeons, 2MB L2 cache, 16.0GB RAM, 90GB disk space, Red Hat Linux 7.0 with
Threaded Web Server Addon, the Tux 2.0 web server, and four
Gigabit Ethernet cards. The document tree was 12.9 gigabytes --
small enough to fit into RAM.
Details at SPEC.org.
IBM also achieved a score of 2700 SPECweb99 on an IBM eServer xSeries 370 with a single 900 MHz Pentium III Xeon, 2MB L2 cache, 8.0GB RAM, 90GB disk space, Red Hat Linux 7.0 with
Threaded Web Server Addon, the Tux 2.0 web server, and three
Gigabit Ethernet cards. The document tree was 8.8 gigabytes -- too
large to fit into RAM.
Details at SPEC.org.
The TPC family of benchmarks has the interesting feature that the
price of the server and its software is part of the benchmark
(showing that the target audience is managers, not geeks!); to
compare an 8-CPU server against a 16-CPU server, compare not the
raw score (queries per hour), but the cost per score unit (dollars
per queries per hour).
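For example (with invented numbers, purely to illustrate the comparison rule): a 16-CPU server can post the higher raw score yet still lose on dollars per QphH.

```python
# Toy illustration of the TPC comparison rule above; the prices and
# scores here are invented, not taken from any audited result.
systems = {
    "8-CPU server":  {"price_usd": 500_000,   "qph": 1_800},
    "16-CPU server": {"price_usd": 1_400_000, "qph": 2_900},
}
for name, s in systems.items():
    print(f"{name}: {s['price_usd'] / s['qph']:.0f} $/QphH")
# 8-CPU server: 278 $/QphH  -- the better buy, despite its lower raw score
# 16-CPU server: 483 $/QphH
```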
In September 2002, the first TPC-C benchmark results for a Linux database system were reported. They were a long time coming, because these tests are quite expensive to conduct, and vendors wanted to make darn sure Linux would perform well. It was worth the wait!
More recently, Intel has mentioned that it is working on a 32-processor TPC-C benchmark with Linux. No results yet.
Here are the top ten clustered TPC-C results
by price/performance and
by performance.
In this test, Linux beat Windows 2000 Advanced Server by a hair
both in absolute performance and in price/performance. Looks like
they're pretty closely matched. For those interested in what new
features of Linux helped performance the most, see this note from HP on the
linux-kernel mailing list. (HP emphasizes
the test used an unmodified copy of Red Hat Advanced Server
2.1.)
According to tpc.org, the TPC-H benchmark measures the performance of a database in decision support environments where it's not known in advance which queries will be executed, so pre-knowledge of the queries may not be used to optimize the DBMS. Consequently, query execution times can be very long.
Results for four sizes of database are reported: 100GB, 300GB, 1TB, and 3TB (GB = gigabyte = 10^9 bytes; TB = terabyte = 10^12 bytes). Results from one database size should not be compared with results from a different size.
Score: 1669 QphH (queries per hour, TPC-H)
$/Score: 169 US dollars per QphH
Database: Microsoft SQL Server 2000 Enterprise Edition
OS: Microsoft Windows 2000 Advanced Server
Hardware: Unisys 5085, with eight 700 MHz Pentium III Xeon CPUs (2MB L2 Cache) sharing 4 GB RAM and seven RAID disk controllers. (That's a total of 8 CPUs and 4 GB RAM.)
Score: 2733 QphH (queries per hour, TPC-H)
$/Score: 347 US dollars per QphH
Database: IBM DB2 UDB EEE 7.2
OS: 2.4.3 Linux kernel with SGI's ProPack 1.5 kernel patch
Hardware: Four SGI 1450 servers; each server has four 700 MHz Pentium III Xeon CPUs (2MB L2 Cache) sharing 4 GB RAM and five Fibre Channel disk controllers. (That's a total of 16 CPUs and 16 GB RAM.)
The SGI result is noteworthy not because it's particularly
great, but because it's the first audited TPC benchmark of any sort
reported for Linux.
The SAP "Sales & Distribution" benchmark tests the performance
of the SD module of SAP's very popular R/3 business suite.
Approximately 14% of the CPU cycles on the server are spent in the
database; the rest are spent in SAP's code. The benchmark is
defined at www.sap.com/solutions/technology/
and/or www.sap.com/solutions/technology/bench.htm.
That page also links to "R/3
Benchmarks - Results" (updated periodically), a complete
discussion of certified SAP benchmark results of various sorts.
Some info is also available second-hand from IdeasInternational.com.
See also http://www.sap.com/linux/.
Note that you can't compare the results of this benchmark run on
different releases of R/3. According to Siemens, release 4.0 is 30%
slower than release 3.x, and it looks like release 4.6 is somewhat
slower than release 4.0.
Also note that the SAP S&D benchmark comes in two flavors:
two-tier and three-tier. The two-tier tests involve a single
server; the three-tier tests are much larger.
Recent Two-Tier SAP S&D Benchmark Results for Intel-based
Servers
An up to date summary of the Two-Tier S&D benchmark results is
online at
www.sap.com/solutions/technology/benchmark/HTML/SD_2_tier_4x.htm,
but it has a layout problem with anything but IE; you may prefer to
view my cached copy
with the layout problem fixed.
Here is an excerpt showing all the Linux results, plus
benchmarks on comparable machines using Windows, plus the top
performers. Results are sorted first by R/3 version, then by
benchmark performance.
Recent Three-Tier SAP S&D Benchmark Results for Intel-based
Servers
An up to date summary of the Three-Tier S&D benchmark results
is online at
www.sap.com/solutions/technology/benchmark/HTML/SD_3_tier_4x.htm.
The three-tier results involve a system consisting of one
database server and many application servers; the Fujitsu/Siemens results are the only
ones on Linux that I know of.
Fujitsu/Siemens demonstrated a three-tier version of the SAP
S&D benchmark, where the central server was running Solaris and
the middle layer of servers were running Linux. See fsc.db4web.de/DB4Web/fsc/user/en/pm_einzeln.d4w?pm_id=319
and
www.sap.com/solutions/technology/benchmark/PDF/CERT2900.pdf for
more info.
Copyright 1999,2000,2001,2002,2003 Dan Kegel
[email protected]
Last updated: 14 Oct 2003