Radified Guide to SCSI - Boot from a SCSI Hard Drive


Onboard vs PCI adapter/controller card:

Some motherboards come with onboard SCSI support. Mobo manufacturers typically put only the latest/greatest SCSI controllers on their boards, cuz an onboard solution can't be upgraded. That same lack of upgradability is why I prefer a PCI card, even tho you tend to pay a little more for the PCI card than for an integrated (onboard) solution, cuz the card requires its own PCB.

PCB costs money, and so does manufacturing. Since the onboard solution is cheaper to manufacture, the manufacturer can pass those savings along to you.

Personally, I'd rather pay a little more now & not have to worry about what to do when it comes time to upgrade my mobo. In fact, I recently upgraded my (Abit) BH6 to an (Asus) CUSL2, but I kept the same SCSI card (Tekram DC390-U2W). If I'd had an onboard SCSI controller, I woulda had to buy another one. But the onboard-vs-PCI-card issue is mostly personal preference. Both sides make equally compelling cases. To me, it boils down to upgradability vs cost, and I'll pay a little extra for upgradability.


The onboard solution weighs in better if you get a new mobo whose integrated SCSI controller just hit the market, cuz then your onboard solution will stay current longer. For example, Ultra160 is currently the latest version/spec, with Ultra320 (320MB/s) somewhere in the pipeline.

Might be prudent to note here that the standard 32-bit PCI bus is limited to a maximum of 133MB/s (at 33MHz), with realistic data x-fers more in the area of 100MB/s once you take overhead into account .. so a good portion of the bandwidth offered by the U160 spec (160MB/s) is wasted on standard/current mobo's with a 32-bit PCI bus. That said, I know plenty of smart people who have opted for the onboard solution to get big performance at a bargain-basement price.
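
For the curious, here's the back-of-the-envelope math behind those numbers, sketched in Python. The ~25% overhead figure is my own rough assumption, picked to land near the ~100MB/s estimate above:

    # 32-bit PCI bandwidth, back-of-the-envelope
    bus_width_bytes = 4          # 32-bit bus moves 4 bytes per clock
    clock_mhz = 33.33            # standard PCI clock

    theoretical = bus_width_bytes * clock_mhz   # ~133 MB/s
    realistic = theoretical * 0.75              # assumed ~25% protocol overhead

    print(f"theoretical: {theoretical:.0f} MB/s")   # 133
    print(f"realistic:   {realistic:.0f} MB/s")     # ~100
    print("U160 spec:   160 MB/s -- more than the bus can carry")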

I have never used a mobo with an onboard SCSI solution, so I hesitate to talk much about them. But it's my understanding that using the onboard SCSI renders one PCI slot useless, leaving you one less slot available. If this is not the case, pls lemme know. Again, this is more personal preference than anything else. I went with a PCI card solution and am happy with my decision. PlanetHardware looks more closely into this issue here.


Win98/SE/ME vs. Win2K/XP: Win2K makes better use of SCSI's multitasking capabilities. But you will still notice an impressive performance improvement with Win98/SE/ME, thanks to SCSI's blazingly-fast seek/access times. The mere fact that Win2K multitasks better is no reason to claim that SCSI isn't worth it for those running a system based on the W9x kernel.

It is not difficult to set up a dual-boot config and use Win2K for the apps that take better advantage of its multitasking/multithreaded capabilities. I have heard ppl say that SCSI is wasted on W98/SE/ME. It's not. Win9x-kernel-based OS'es still benefit from SCSI's smoking access times, and are (still) multitasking-capable.

update 30mar2002. Seems there are problems running Windows XP on a SCSI drive. See here. At this point, installing Windows XP to a SCSI drive is not recommended. The problems involve small block writes. I wrote to LSI Logic, who makes the chipset my SCSI card is based upon. They are looking into it, and calling it "an O/S issue".


IDE RAID vs LVD SCSI: Let me preface by saying I have never configured a RAID array - IDE, SCSI or otherwise. But I have researched the topic well. RAID does nothing to improve the seek/latency/access times of the drive(s) in the array. In fact, I've heard that an IDE RAID controller actually *adds* 1-to-3ms to access time .. altho I have not verified this. RAID's big advantage comes in the area of STRs - sustained transfer rates.

High STRs are great for things like audio & video files, but they don't help with things that depend on access times - like running an operating system, applications, or the swap/paging file. So I'm not saying that IDE RAID is no good. I'm saying it's not as good as SCSI for running an OS, apps & the swap/page file.
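
To see why access time dominates, here's a rough Python sketch of where the time goes on a small random read. The seek, STR & block-size numbers are illustrative assumptions, not measurements:

    # Time breakdown for one small random read (illustrative numbers)
    seek_ms = 8.5                                  # assumed average seek
    rotational_latency_ms = 0.5 * 60_000 / 7200    # half a rev at 7200rpm, ~4.2ms
    str_mb_s = 40                                  # assumed sustained transfer rate
    block_kb = 4                                   # a typical OS/swap-sized read

    transfer_ms = (block_kb / 1024) / str_mb_s * 1000   # ~0.1ms
    total_ms = seek_ms + rotational_latency_ms + transfer_ms

    print(f"seek {seek_ms}ms + latency {rotational_latency_ms:.1f}ms "
          f"+ transfer {transfer_ms:.2f}ms = {total_ms:.1f}ms")
    print(f"transfer is only {transfer_ms / total_ms:.1%} of the total")
    # Doubling STR with RAID-0 only shrinks that last ~1%.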

IDE RAID is not a multi-tasking interface (SCSI is). Also, IDE RAID has a lower reliability factor than SCSI, for two reasons:

1. SCSI drives are built better & are more reliable than IDE drives (typically a 5-yr warranty vs a 1-to-3-yr warranty).
2. a 2-drive (RAID-0) stripe has twice the chance of failure of a single IDE drive (a quick sketch of the math follows below).
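
Here's that failure math in a few lines of Python. The 3% annual failure rate is an assumed round number, purely for illustration:

    # RAID-0 reliability: the stripe is lost if EITHER drive fails
    afr = 0.03                       # assumed 3% annual failure rate per drive
    p_stripe = 1 - (1 - afr) ** 2    # chance at least one of two drives fails

    print(f"single drive:   {afr:.1%} chance of failure per year")   # 3.0%
    print(f"2-drive stripe: {p_stripe:.1%} -- roughly double")       # 5.9%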

It doesn't matter how high a transfer rate an array can sustain if it spends all its time *seeking*.


Games: Games do not use the SCSI interface to its fullest, cuz once a game initially loads, the only time the drive comes into play is when another level/map loads. During actual game play, the drive sits idle. Yet I know a surprising number of hard-core gamers who keep all their favorite games on a SCSI hard drive, and would have it no other way. Game play is affected more by your graphics card, your CPU, your RAM, your monitor, and your network connection.


Low-level format: I have heard that low-level formatting should only be done as a last resort. I have never had to low-level format any drive.


80-pin SCA vs 68-pin connectors: Some places (most notably Egghead) offer attractive deals on 80-pin SCA drives. SCA stands for Single Connector Attachment. Basically, SCA incorporates the 4-pin power connector into the standard 68-pin connector, which makes it easy to swap out drives on monster, rack-mounted server/drive farms. I do not recommend 80-pin SCA drives, cuz you will have to get an 80-to-68-pin adapter.

Not only can these be expensive (I've heard of folks paying $40), but you typically want to avoid any kind of adapter. Adapters simply add one more part to the equation that can go wrong. If you find a (seemingly) killer deal on an 80-pin SCA drive, make sure you figure in the cost (+ shipping) of the adapter. Some 80-pin SCA drives come with adapters, but most don't. I would gladly pay a little extra to not have to use an adapter. But I have friends who have them & use them with no probs, so use your own good judgment.


7200rpm vs 10Krpm drives: For reasons of performance, I do not recommend 7200rpm SCSI drives - unless you simply want to learn how to configure a SCSI system. If you cannot afford a 10Krpm SCSI drive, I recommend waiting until you can. A 7200rpm SCSI drive will still provide you with SCSI's multitasking/multi-threaded capabilities, but its seek/access & STR performance is not enuf (IMHO) to justify the expense.

I've never purchased or used a 7200rpm SCSI drive, but I have talked to people who have. Most are quite pleased with their 7200rpm SCSI drives, and say they're definitely faster than 7200rpm IDE/ATA drives at running their OS'es & apps .. even tho a current-generation IDE/ATA drive may have a higher STR (sustained transfer rate).


10Krpm vs 15Krpm drives: I've never used a 15Krpm drive, but if you look at the performance numbers (benchmarks), you'll see that the difference between 10K & 15K is not as large as the difference between 7200rpm & 10K. Also, the price jump from 10K to 15K is larger than the price jump from 7200rpm to 10K.
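
Part of the reason is rotational latency, which you can compute directly. A quick Python sketch (rotational latency only - seek times, which I haven't modeled, follow a similar diminishing curve):

    # Average rotational latency = time for half a revolution
    for rpm in (7200, 10_000, 15_000):
        latency_ms = 0.5 * 60_000 / rpm
        print(f"{rpm:>6} rpm: {latency_ms:.2f}ms average rotational latency")

    # 7200 -> 10K shaves ~1.2ms off every access; 10K -> 15K shaves only ~1.0ms more.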

If you can afford a 15Krpm drive, it's definitely the way to go. But if you can't, 10Krpm will offer most of the performance benefit (especially compared to IDE drives) at a fraction of the cost.


Ultra160 vs Ultra2Wide: Ultra160 is a subset of the Ultra3 protocol. Only 3 of the 5 features in U3W have been implemented in U160; packetization and QAS (quick arbitration & selection) have been left out. Ultra160 supports bus speeds up to 160MB/s. Ultra320 is in the pipe - coming, but not here yet.

The problem is that most current motherboards do not have 64-bit PCI slots, and you need 64-bit PCI to take full advantage of the U160 protocol. Most current mobo's have 32-bit PCI busses, which max out at 133MB/s, tho 100MB/s is a more realistic number once you account for things like overhead & housekeeping. Note that this number (100MB/s) includes data x-fer'ed by every PCI device - sound card, network card, SCSI card, etc. - everything on your PCI bus.

The current reigning fastest drive on the planet (FDOP) - the Cheetah X15-36LP - can barely sustain a data x-fer rate of 60MB/s. So you can see that any bandwidth above U2W/LVD speeds (80MB/s) is likely to be wasted. So, the argument goes, why pay for more speed than you can use?
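
To put a number on that headroom, here's a small Python sketch. The 60MB/s figure is the rough X15-36LP rate quoted above:

    # How many top-end drives does it take to saturate each protocol?
    drive_str = 60   # MB/s, approximate sustained rate of the fastest drive

    for protocol, bandwidth in (("U2W/LVD", 80), ("U160", 160)):
        drives_needed = bandwidth / drive_str
        print(f"{protocol} ({bandwidth}MB/s): saturated by ~{drives_needed:.1f} such drives")

    # With a single drive, everything past ~60MB/s goes unused on either protocol.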

There are two answers to that question.

  1. People like running their hard drives at the same protocol as their controllers. Makes them feel better, like there's less chance of problems. Since most new drives come in U160 flavor, they like their SCSI adapter to be U160 too. In theory, it shouldn't matter: mixing & matching protocols should work without a hitch, cuz the SCSI protocols are specifically designed to be backward compatible.

  2. Longevity. A U160 card will last longer than a U2W card.

Note that you can exceed the limitations of the PCI bus (133MB/s theoretical, 100-110MB/s realistic) with a U160 controller and U160 devices if the data being transferred doesn't have to go out over the PCI bus. In other words, if the data is being transferred from one SCSI device to another, you can exceed 133MB/s. Obviously, you need more than one SCSI device to achieve this.

But it's difficult to justify U160 from a performance standpoint, cuz you pay for performance you can't use (yet). Still, if I were buying today, I'd get the U160 card anyway. I prefer to get a piece of hardware, learn its quirks, and keep it for as long as I can. I like the Tekram DC390-U3W controller (~$185).

Next -> [SCSI Guide - Miscellaneous Info]

Previous -> [SCSI Guide - Tekram vs Adaptec]