When we heard that Intel was entering the storage market with its own SSDs, we were intrigued. When we first got the specs and model name for Intel’s mainstream notebook drive, the X25-M, at IDF, we were drooling in anticipation of its touted 250MBps read rate, more than twice that of its nearest competitor. Then, when we reviewed the X25-M ourselves and declared it the fastest notebook drive of all time, we had to interview someone at Intel to find out how they did it and what other high-speed tricks they have hiding in a clean room somewhere. We spoke with Kishore Rao, Intel’s Product Line Manager for SSDs. Here’s what he told us:
Why did Intel decide to go into the SSD market?

Basically, I think it’s really simple. If you look at PC and CPU performance over a little more than the last decade, clearly CPUs have done stellar things in terms of increasing performance. Single-core CPUs have delivered roughly 65x the performance over 12 years, and multicore CPUs almost 175x. On the other hand, hard drives have been doing great in capacity, but performance really hasn’t improved in the storage subsystem: it’s about a 1.3-1.5x increase [in the past 13 years]. What that does is create a huge bottleneck for the PC and for the user. You see it in the hourglass, or the hard drive light going while you sit there waiting for it to finish. That I/O bottleneck is slowing down systems. Essentially, Intel wanted to enter the NAND business to make money in it, obviously, but also to ensure that we eliminate that I/O bottleneck. From that standpoint, Intel has been working from the ground up to develop an SSD which solves the I/O bottleneck problem.

Why did you choose to go with MLC memory for your mainstream SSDs?

If you look at consumers and OEMs and what they want, they want to be able to replace hard drive capacities. Today, my computer has an 80GB hard drive; now it has an SSD in there. From a price-per-gigabyte standpoint, MLC is a great fit, because it’s twice the amount of storage in the same amount of silicon space, so MLC has an inherent price advantage from an OEM standpoint. We believe a lot of competing SSDs had to go with SLC because they couldn’t deliver the performance otherwise. However, I think we solved the problem of delivering the performance and reliability needed to remove the I/O bottleneck with MLC, and therefore we chose MLC. We also have an SLC-based X25-E Extreme edition SSD.

One of the questions that a lot of people have when they first learn about MLC memory vs. SLC is about write endurance.
For most MLC drives, it’s supposed to be 10,000 write cycles. What is the M series rated for?

I’ll answer that in two ways: from the standpoint of the NAND itself, and from an SSD standpoint. If you look at the NAND, the NAND we use has 100,000 cycles at the actual block level of the M series flash. At the system level, though, you don’t necessarily need that kind of flash endurance per block. The reason is that with the architecture Intel has, which is a 10-channel architecture with multiple NAND packages and dies available, you’re able to spread the writes around, and therefore you don’t need the block-level endurance that a single flash device would require. So essentially, at a system level, we’re able to have a five-year useful life. An average user writes, let’s say, 4 to 5GB a day, but we’ve designed the product to accept a margin of 100GB per day for five years and still not wear out the flash. So the cycling endurance is important, but the firmware and the controller architecture we use work around those issues.

Who do you expect to buy the new M series drives? Who’s the target audience?

We certainly feel there’s a segment of high-end consumers, gamers for example, and power users (and from a corporate standpoint you want the ruggedness of an SSD as well as the performance and battery savings) who will absolutely adopt this drive. In addition, we’re seeing two other markets. One is corporate IT: the IT shops are definitely looking at SSDs from a user productivity and materials savings standpoint. The other is OEMs: at our press event at IDF we had both Lenovo and HP there live, showing demos. We believe the targets for the M series are high-end consumers and corporate IT, that class of notebooks.

Are there any other vendors who are going to be using this in their notebooks?

We obviously work with a wide variety of worldwide OEMs as well as the channel.
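[Editor’s note: Rao’s endurance arithmetic is easy to sanity-check. The sketch below uses only figures from the interview (an 80GB drive, 100GB of host writes per day for five years, the 100,000-cycle block rating Rao cites, and the write-amplification factor of “a little over 1” he mentions later); it assumes ideal wear leveling and is an illustration, not Intel’s actual model.]

```python
# Rough endurance sanity check for Rao's "100GB/day for five years" claim.
# All figures come from the interview; the arithmetic is only illustrative.

capacity_gb = 80                  # X25-M capacity mentioned in the interview
host_writes_per_day_gb = 100      # Rao's stated design margin
years = 5
write_amplification = 1.1         # Rao's "a little over 1"
rated_cycles = 100_000            # Rao's block-level figure

nand_writes_gb = host_writes_per_day_gb * 365 * years * write_amplification
# With ideal wear leveling, writes spread evenly over every block,
# so each cell sees the total NAND writes divided by the capacity.
cycles_used = nand_writes_gb / capacity_gb

print(f"NAND written over {years} years: {nand_writes_gb:,.0f} GB")
print(f"P/E cycles consumed per cell:  {cycles_used:,.0f} of {rated_cycles:,}")
```

Roughly 2,500 cycles per cell: comfortably under Rao’s 100,000-cycle figure, and still under the 10,000-cycle MLC rating our question cited.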
We have HP and Lenovo, who talked about it. In addition, a gaming company called Falcon Northwest was part of our event as well, and they publicly stated that as soon as it’s available, they’ll start shipping Intel SSDs. We had a very cool demo which showed the M series SSD delivering 2x the performance of two Raptor drives.

How did Intel manage to achieve these kinds of read speeds?

I’ve gotten that question a lot, first of all from our OEMs, who said, “Are you kidding? How can you almost saturate the SATA-2 bus?” Several things. Going back to why Intel is in the NAND business: we wanted to solve the I/O problem, so we started from the ground up. Our goal from day one was to solve that bottleneck, so we designed a controller from the ground up, with our own firmware team and our own Intel controller. We (and by “we” I mean the storage technology group at Intel) have done a lot of research in the storage space and been instrumental in the Serial ATA standard. We measured drives to see what kinds of reads and writes they actually serve, and most often with applications the reads are essentially 4KB, and the hard drive especially gets hit with random reads and random writes, which is what hurts from a latency standpoint. So the design was done from the ground up to optimize for random reads and random writes, especially at 4KB and all the way up to 16KB. Competing SSDs seem to focus on sequential transfers, and while those are important (by the way, we have up to 250MBps sequential reads, which is close to saturating the SATA-2 bus once you account for overhead), random is really where the user’s responsiveness can benefit. The number one thing is that we optimized for reads and writes at 4K random.
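[Editor’s note: the “close to saturating the SATA-2 bus with overhead” claim checks out. SATA-2 signals at 3.0Gbps, and its 8b/10b line coding spends 10 bits on the wire for every data byte, so the payload ceiling before any protocol overhead is 300MB/s. A quick back-of-the-envelope check:]

```python
# SATA-2 bandwidth ceiling vs. the X25-M's quoted 250MBps read rate.
# 3.0 Gbit/s line rate with 8b/10b encoding (10 wire bits per data byte)
# gives the familiar 300 MB/s payload ceiling; protocol overhead eats
# into that further, so 250 MB/s really is near saturation.

line_rate_bps = 3.0e9          # SATA-2 signaling rate
wire_bits_per_byte = 10        # 8b/10b encoding

ceiling_mb_s = line_rate_bps / wire_bits_per_byte / 1e6
print(f"payload ceiling: {ceiling_mb_s:.0f} MB/s")            # 300 MB/s
print(f"250 MB/s is {250 / ceiling_mb_s:.0%} of the ceiling")  # 83%
```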
In addition, most of the controllers out there today have an issue with what we call write amplification, the ratio of NAND writes to host writes. (We definitely researched the controller space before we decided to design our own.) The host wants to write, say, a megabyte of data to the SSD, but because of the way NAND works, you may have to erase a complete block just to replace a couple of pages of data. For a one-megabyte host write, we’ve seen up to 20 megabytes of data written to the NAND, so those controllers are servicing NAND writes more than they’re servicing the host. If that write amplification factor is low (in Intel SSDs it’s really low, a little over 1), the SSD can spend its time servicing the host and deliver the performance. So the first thing is the optimization for random reads and random writes, and the second is that low write amplification enables faster throughput.

What tasks have you seen the most gain from with this drive?

We have done some work in that space. Obviously, your hard drive slows down when multiple applications are requesting access from the drive at once, so the fastest response, and the value this gives, is in multitasking. For example, say you have a high-end gamer running Vista with Windows Defender on, and at the same time a patch for a game comes in, let’s say a World of Warcraft patch. That patch’s installation will happen 140% faster than with a 5,400rpm hard drive. So the gains definitely come when you’re multitasking, when you have high I/O, but also when you have high-compute and high-I/O-intensive tasks at the same time: we’ve found that the faster I/O enables the CPU to get done sooner. In a notebook, the ability to get the workload done faster actually saves you battery life, because the system drops to a lower power state sooner.
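[Editor’s note: write amplification is the factor by which physical NAND writes exceed what the host asked for; Rao’s 1MB-in, 20MB-written example is a factor of 20. A toy illustration using his figures; the helper function and the exact 1.1 value are our own framing of his “a little over 1” claim, not Intel code.]

```python
# Toy illustration of write amplification (NAND writes / host writes),
# using the figures Rao gives. A naive controller that rewrites a whole
# erase block to update a few pages inflates the ratio badly.

def write_amplification(host_bytes: int, nand_bytes: int) -> float:
    """Ratio of bytes physically written to NAND vs. bytes the host requested."""
    return nand_bytes / host_bytes

MB = 1024 * 1024

# Rao's worst case: host writes 1 MB, controller ends up writing 20 MB.
naive = write_amplification(1 * MB, 20 * MB)

# Intel's claim for the X25-M: "a little over 1" (1.1 is our assumption).
x25m = write_amplification(1 * MB, int(1.1 * MB))

print(f"naive controller: {naive:.1f}x")   # 20.0x
print(f"X25-M (claimed):  {x25m:.1f}x")    # 1.1x
```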
It’s multitasking, and also when there are high-CPU and high-I/O-intensive tasks. Windows Defender itself can scan the system 40% faster on an SSD.

The pricing of the M series is very competitive compared to any SLC drive on the market, but we noticed that it’s higher than low-cost drives like the OCZ Core Series. Where does Intel fit into the marketplace?

The price we announced, in quantities of 1,000, is $595, and obviously that’s the highest price we’ll ever sell this for. Our OEMs, as you might imagine, get a much better deal because they have larger volume, and quite frankly the 1,000-unit quantities are purchased by very few people in distribution, so our actual pricing to OEMs and our customers is much lower than that. As for the fit from a pricing standpoint: obviously there are SSDs offered at value price points, including the ones you’ve mentioned, and one thing to remember is that NAND cost and pricing historically falls 40% a year. We see adoption of the X25-M and X18-M in corporate IT notebooks. We’ve had some studies within Intel, and some data we presented, which show a $400 savings over the life of the PC just by adopting SSDs, in addition to the user productivity gain, which is estimated at about two weeks of user time over the life of the PC; I don’t know how you put a dollar amount on two weeks of time. SSDs will definitely be priced higher than hard drives of the same capacity; in dollars per gigabyte, the hard drives will always have the advantage. But we feel that with our 34nm product coming out in 2009, we’ll be able to provide a mainstream price point for consumers at the same capacities.

Where do you see the price of SSDs going? At what point do you think everyone will replace their hard drives with SSDs?

There are two kinds of users. One is in a corporate environment, and in some of the smaller form factor PCs, where you don’t actually need an entire 320GB hard drive in there.
Users in that category probably use 80 to 100GB of data. For those kinds of notebooks and subnotebooks, I think SSDs will penetrate, and we expect about 50% penetration by the 2011-2012 timeframe. The model there will be that if users want additional storage, they’ll plug in an external hard drive, via USB or a docking station, or use network storage. I don’t think there will ever be a point where we can say SSDs have replaced hard drives for all categories of computer. There will always be a memory continuum of storage tiers. As NAND prices go lower, SSDs creep further into the hard drive space. What’s more interesting, from what we’ve seen, is that there are certain types of hard drives we think will be squeezed out of the market. If you look at the 15K or 20K rpm Raptor [Editor’s note: the Raptor is only 10,000rpm, though some SCSI drives go up to 15,000], I think they’ll get squeezed out, because SSDs deliver the IOPS and the performance, and consumers who want additional storage space will use 5,400 or 7,200rpm drives.

Is it going to be distributed as white-box OEM or in a retail box?

At this time, it will only be white box. We think the buyers will mostly be resellers that are aggregators, putting together systems. We’re obviously evaluating plans for a retail product, but not at this time. There’s definite traction out there, and a need for end consumers to buy an Intel SSD, and we fully expect they’ll be able to purchase one through distribution. It will be mostly high-end gaming folks who do that.
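[Editor’s note: Rao’s “40% a year” historical NAND price decline compounds quickly. A purely hypothetical projection from the $595 1,000-unit price; real drive prices track NAND cost only loosely, and the rate is Rao’s historical figure, not a forecast.]

```python
# Hypothetical projection of the X25-M's $595 1,000-unit price under a
# compounding 40%/year decline (Rao's historical NAND figure).
# Illustrative only: at that rate the price falls under $100 by 2012.

price = 595.0
for year in range(2008, 2013):
    print(f"{year}: ${price:,.0f}")
    price *= 1 - 0.40   # 40% annual decline
```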