


Strontium is probably a new acquaintance for many of our readers, but their story may be familiar. Founded in 2002 in Singapore, Strontium is one of the many memory manufacturers that have turned to SSDs to expand their product portfolio and increase revenues. Dozens of companies have done the same: OCZ, Kingston, Mushkin, Crucial/Micron, ADATA, and Corsair to name a few. It's a logical move because the same companies that manufacture DRAM often make NAND too, so the contacts and channels are already in place. When you add the fact that all the controller/firmware technology can be licensed from SandForce (or Marvell if you're willing to invest in firmware), you don't actually need much in-house engineering to build an SSD.

What makes Strontium special is the fact that they actually buy the whole SSD from a third party and simply rebrand it. In the case of the Hawk, Strontium sources the SSD from Toshiba, although SK Hynix has also been one of their suppliers in the past. Now, this isn't anything unheard of; readers who have followed the SSD scene for longer may remember Kingston's SSDNow M series, which was a rebranded Intel X25-M G2. However, to date I've not come across a company whose business model relies completely on rebranding SSDs. Strontium told me that after long-term evaluation they decided to go with a proven, high-quality product and focus their resources on other aspects such as validation and marketing. Strontium didn't go into detail about their validation methods, but they told me that they verify that the products they ship perform to the spec provided by the actual manufacturer (including reliability tests).

There's even the original Toshiba sticker on the back of the drive

Due to the location of Strontium's headquarters, their main market is still Asia. Strontium does have an office in the US, but unfortunately none of the bigger retailers stock their SSDs at the moment. They do have their own online store that ships to the US, though.

Strontium Hawk Specifications
Capacities: 120GB & 240GB
Controller: Toshiba TC58NC5HA9GST
NAND: Toshiba 19nm Toggle-Mode 2.0 MLC
Sequential Read: Up to 534MB/s
Sequential Write: Up to 482MB/s
4KB Random Read: Up to 90K IOPS
4KB Random Write: Up to 35K IOPS
Warranty: Three years

The Hawk is available in capacities of 120GB and 240GB, but as you can see from the Toshiba sticker above, the drive is actually a 256GB model. What's odd is that the full 256GB is also exposed to the user (238.5GiB as displayed in Windows). At first I thought there was a problem with my sample because Strontium had switched suppliers (I'll get to this soon), but Strontium told me that they advertise the drive based on the capacity available to the end user. In other words, Strontium markets it as 240GB because it shows up as a 238.5GiB drive in Windows due to the difference between gigabytes and gibibytes. I wouldn't have a problem with this if it were Intel or some other major SSD OEM doing it, as they might actually be able to change the way GB is defined in the storage industry, but I don't see the point for a smallish company like Strontium. The only thing they will accomplish is confusing buyers, because everyone else defines a GB as 1000^3 bytes.
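For those who want the arithmetic spelled out, here's a quick sketch of the conversion (Python; the 256GB figure is the advertised raw capacity, and the exact byte count of any individual drive will differ slightly):

```python
# Decimal vs. binary units: 1 GB = 1000^3 bytes, 1 GiB = 1024^3 bytes
advertised_bytes = 256 * 1000**3      # the raw 256GB on Toshiba's sticker

in_gib = advertised_bytes / 1024**3   # what Windows reports
print(f"{in_gib:.1f} GiB")            # ~238.4 GiB, essentially the 238.5GiB figure shown in Windows
```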

The other thing I'm not happy about is that Strontium changed the supplier earlier this year but kept the product name the same. It's normal for SSD OEMs to use multiple NAND suppliers and that's fine, but Strontium changed the whole SSD including the controller. The original Hawk was manufactured by SK Hynix and it was based on SandForce's SF-2281 controller and 26nm Hynix NAND (see e.g. Strontium's PR and Legit Reviews' review). My sample, on the other hand, is made by Toshiba and it has a Toshiba-Marvell controller coupled with 19nm Toshiba NAND.

Such a radical change should definitely require a change in the series name too, since we are dealing with two very different SSDs. Strontium even has another SSD series called Python that uses the SF-2281 controller, so I fail to see why Strontium didn't, for example, give the Toshiba-based drive a new name and keep Hawk for the SF-2281 based drive. As far as I know, we are the first site to review the new version of the Hawk, meaning that potential buyers have had no idea that their drive might not be SF-2281 based like all the existing reviews claim. But let's see how the numbers line up first.



Inside The Drive

All major chips are covered by pink thermal pads for heat dissipation. The interesting thing about the controller is that while it's based on Marvell silicon, it does not require any external DRAM. One of the reasons SSD OEMs have preferred SandForce controllers is that they require no DRAM, which helps drive costs down. The DRAM is mainly needed for caching the NAND mapping table, although some manufacturers also use it to cache writes, but it's certainly possible to build a design that doesn't require any external DRAM. Especially if you have access to the silicon (as I suspect Toshiba does; this doesn't seem like an off-the-shelf Marvell controller), you can make sure the controller has sufficient built-in caches to eliminate the need for DRAM. What's odd, however, is that there's an empty spot for a DRAM package right next to the controller. It seems that either some models come with additional DRAM (bigger capacities may require a larger NAND mapping table) or the PCB design was simply finalized before the controller/firmware. I asked Strontium about the empty DRAM spot, but unfortunately they had no knowledge of its purpose as the design is purely Toshiba/Marvell's.
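As a rough illustration of why larger capacities can call for a bigger mapping table, here's a generic back-of-the-envelope calculation. The 4KiB granularity and 4-byte entries are common assumptions for a flat page-level FTL, not details of Toshiba's actual design:

```python
# Generic page-level FTL sizing (illustrative assumptions, not Toshiba specifics):
# 4KiB mapping granularity and 4 bytes per logical-to-physical entry.
raw_nand_bytes = 256 * 1024**3       # 256GiB of raw NAND on this drive
map_granularity = 4 * 1024
entry_size = 4

table_bytes = (raw_nand_bytes // map_granularity) * entry_size
print(f"{table_bytes // 1024**2} MiB")   # 256 MiB, far too large for on-die SRAM alone,
                                         # so a DRAM-less design must cache only part of it
```

This is also where the common rule of thumb of roughly 1MB of DRAM per 1GB of NAND comes from.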

As for the NAND, there are eight packages and each package is 32GiB (4x8GiB) in size, for 256GiB of NAND in total.

Test System

CPU: Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard: AsRock Z68 Pro3
Chipset: Intel Z68
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card: XFX AMD Radeon HD 6850 XXX (800MHz core clock; 4.2GHz GDDR5 effective)
Video Drivers: AMD Catalyst 10.1
Desktop Resolution: 1920 x 1080
OS: Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit



Performance Consistency

In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is that inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying it can result in higher peak performance at the expense of much lower worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near the length of our steady state tests but enough to give me a good look at drive behavior once all spare area had filled up.

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs uses the same scale across all drives for easy comparison. The first two sets use a log scale, while the third set uses a linear scale that tops out at 40K IOPS for better visualization of the differences between drives.
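For reference, a minimal sketch of how per-second IOPS samples can be turned into plots like the ones below. The log file name and its one-value-per-line format are hypothetical stand-ins for whatever your IO tool writes out:

```python
# Minimal sketch: turn one-IOPS-sample-per-second data into a log-scale scatter plot.
import matplotlib.pyplot as plt

with open("iops_per_second.log") as f:          # hypothetical log: one IOPS value per line
    iops = [float(line) for line in f if line.strip()]

plt.scatter(range(len(iops)), iops, s=4)
plt.yscale("log")                               # log scale, as in the first two sets of graphs
plt.xlim(0, 2000)                               # the full 2000 second test window
plt.xlabel("Time (s)")
plt.ylabel("4KB random write IOPS (QD32)")
plt.show()
```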

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, I did vary the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area.  If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here but not all controllers may behave the same way.
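Working out how large the partition should be for a given amount of extra spare area is simple arithmetic; here's a quick sketch (the 25% figure matches the "25% Spare Area" results below):

```python
# Leave a fraction of the drive unpartitioned so it acts as additional spare area.
drive_bytes = 256 * 1000**3        # advertised capacity of the Hawk
extra_spare = 0.25                 # fraction to leave unpartitioned

partition_bytes = int(drive_bytes * (1 - extra_spare))
print(f"Partition size: {partition_bytes / 1000**3:.0f} GB, "
      f"unpartitioned: {drive_bytes - partition_bytes:,} bytes")
```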

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
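To put a rough number on that read-modify-write penalty, here's a simplified write amplification illustration using a basic greedy garbage collection model. The page counts are made-up figures, not measurements from the Hawk:

```python
# Simplified greedy-GC model: to reclaim a block, its remaining valid pages must be
# rewritten elsewhere; only the freed (invalid) pages can absorb new host writes.
pages_per_block = 256
valid_pages_in_victim = 200                      # hypothetical steady-state figure

host_pages_absorbed = pages_per_block - valid_pages_in_victim
write_amplification = pages_per_block / host_pages_absorbed
print(f"{write_amplification:.1f}x")             # ~4.6 NAND writes per host write
```

Real controllers are smarter than this (they pick victim blocks with the fewest valid pages), but the basic relationship holds: the less spare area left, the more valid data has to be shuffled per host write, and the further performance falls.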

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graphs: IOPS over the full 2000 second run (log scale). Drives: Strontium Hawk 256GB, Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB; Default / 25% Spare Area]

Performance consistency isn't very good. It's not horrible, but compared to the best consumer drives it leaves a lot to be desired. Fortunately there are no major dips in performance as the IOPS stays above 1,000 in all of our data points, so users shouldn't experience any significant slowdowns. Increasing the over-provisioning definitely helps, but the IOPS is still not very consistent: it bounces between roughly 2K and 10K IOPS, whereas for the most desirable drives the plot is essentially a flat line with little variation.

[Interactive graphs: IOPS at the beginning of steady state operation, t=1400s (log scale). Same drives as above; Default / 25% Spare Area]

[Interactive graphs: IOPS at the beginning of steady state operation, t=1400s (linear scale). Same drives as above; Default / 25% Spare Area]



AnandTech Storage Bench 2013

When Anand built the AnandTech Heavy and Light Storage Bench suites in 2011 he did so because we didn't have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.

There were a couple of issues with our 2011 tests that we've been wanting to rectify however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't exist had we built the trace on a system with SP1. This didn't really impact most SSDs but it proved to be a problem with some hard drives. Secondly, and more recently, we've shifted focus from simply triggering GC routines to really looking at worst case scenario performance after prolonged random IO.

For years we'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up we didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans--not exactly a real world client usage model. The aspects of SSD architecture that those tests stress however are very important, and none of our existing tests were doing a good job of quantifying that.

We needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. We think we have that test. The new benchmark doesn't even have a real name; we've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).

Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test--we record all IO requests made to a test system, then play them back on the drive we're measuring and run statistical analysis on the drive's responses.

As with most modern benchmarks, Anand crafted the Destroyer out of a series of scenarios. For this benchmark we focused heavily on photo editing, gaming, virtualization, general productivity, video playback, and application development. Rough descriptions of the various scenarios are in the table below:

AnandTech Storage Bench 2013 Preview - The Destroyer
Photo Sync/Editing: Import images, edit, export (Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox)
Gaming: Download/install games, play games (Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite)
Virtualization: Run/manage VM, use general apps inside VM (VirtualBox)
General Productivity: Browse the web, manage local email, copy files, encrypt/decrypt files, back up the system, download content, virus/malware scan (Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware)
Video Playback: Copy and watch movies (Windows 8)
Application Development: Compile projects, check out code, download code samples (Visual Studio 2012)

While some tasks remained independent, many were stitched together (e.g. system backups would take place while other scenarios were taking place). The overall stats give some justification to what we've been calling this test internally:

AnandTech Storage Bench 2013 Preview - The Destroyer, Specs
Reads: 38.83 million (Heavy 2011: 2.17 million)
Writes: 10.98 million (Heavy 2011: 1.78 million)
Total IO Operations: 49.8 million (Heavy 2011: 3.99 million)
Total GB Read: 1583.02 GB (Heavy 2011: 48.63 GB)
Total GB Written: 875.62 GB (Heavy 2011: 106.32 GB)
Average Queue Depth: ~5.5 (Heavy 2011: ~4.6)
Focus: Worst case multitasking, IO consistency (Heavy 2011: Peak IO, basic GC routines)

SSDs have grown in their performance abilities over the years, so we wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When we first introduced the Heavy 2011 test, some drives would take multiple hours to complete it; today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest we've seen it go is 10 hours. Most high performance SSDs we've tested seem to need around 12 - 13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 we just needed something that had a ton of writes so we could start separating the good from the bad. Now that the drives have matured, we felt a test that was a bit more balanced would be a better idea.
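A bit of back-of-the-envelope math on the table above helps put the workload in perspective (everything here is derived from the published totals, not additional measurements):

```python
# Averages derived from the Destroyer's published totals above
reads, writes = 38.83e6, 10.98e6
gb_read, gb_written = 1583.02, 875.62

print(f"avg read size:  {gb_read * 1e9 / reads / 1024:.1f} KiB")      # ~39.8 KiB
print(f"avg write size: {gb_written * 1e9 / writes / 1024:.1f} KiB")  # ~77.9 KiB

hours = 12   # a typical run time for a high-end SSD
print(f"implied data rate: {(gb_read + gb_written) * 1e9 / (hours * 3600) / 1e6:.0f} MB/s")  # ~57 MB/s
```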

Despite the balance recalibration, there's just a ton of data moving around in this test. Ultimately the sheer volume of data here and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc...) makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. Now that the days of begging for better random IO performance and basic GC intelligence are over, we wanted a test that would give us a bit more of what we're interested in these days. As Anand mentioned in the S3700 review, having good worst case IO performance and consistency matters just as much to client users as it does to enterprise users.

We're reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
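In case the two metrics aren't clear, here's a small sketch of how they fall out of per-IO records from the trace playback. The record format (transfer size plus service time per completed IO) is a hypothetical simplification:

```python
# Sketch of the two reported metrics from per-IO (bytes_transferred, service_time_us) records
def destroyer_metrics(io_records, runtime_s):
    total_bytes = sum(size for size, _ in io_records)
    avg_data_rate_mbs = total_bytes / runtime_s / 1e6                  # throughput over the whole run
    avg_service_us = sum(t for _, t in io_records) / len(io_records)   # heavily weighs queued IOs
    return avg_data_rate_mbs, avg_service_us

# Hypothetical example: one fast 4KB IO and one 4KB IO stuck behind a deep queue
print(destroyer_metrics([(4096, 80), (4096, 5000)], runtime_s=1))
```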

AnandTech Storage Bench 2013 - The Destroyer

While its IO consistency is lacking, the Hawk does surprisingly well in our new Storage Bench 2013. It's definitely no challenger to the SanDisk Extreme II or Seagate 600, but the excellent sequential performance makes up for the modest random write performance and consistency. A look at the average service time confirms that IO consistency is what is holding the Hawk back.

AnandTech Storage Bench 2013 - The Destroyer



Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
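As a reference for what this workload looks like, here's a rough sketch of the access pattern in Python. It mirrors the description above (4KB transfers, 4KB aligned, confined to an 8GB span, three outstanding IOs) rather than the actual Iometer configuration:

```python
# Sketch of the 4KB random pattern: 4KB-aligned offsets inside an 8GB LBA span
import random

span_bytes = 8 * 1024**3
io_size = 4096
outstanding_ios = 3      # "three concurrent IOs"
runtime_s = 180          # 3 minutes, listed for completeness

def next_offset():
    return random.randrange(span_bytes // io_size) * io_size   # always 4KB aligned

print([next_offset() for _ in range(outstanding_ios)])
```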

Desktop Iometer - 4KB Random Read (4K Aligned)

Desktop Iometer - 4KB Random Write (4K Aligned) - 8GB LBA Space

As I mentioned earlier, random performance is not the Hawk's biggest strength.

Desktop Iometer - 4KB Random Write (8GB LBA Space QD=32)

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read (4K Aligned)

Desktop Iometer - 128KB Sequential Write (4K Aligned)

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance - AS-SSD

Incompressible Sequential Write Performance - AS-SSD



Performance vs. Transfer Size

ATTO is a useful tool for quickly measuring the impact of transfer size on performance. You can get the complete data set in Bench. Both read and write performance are fairly good at all transfer sizes. Read performance could be slightly better; between IO sizes of 8KB and 32KB the fastest drives are around 100MB/s faster, but the Hawk is certainly not the worst drive here.




AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally we kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system. Later, however, we created what we refer to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. This represents the load you'd put on a drive after nearly two weeks of constant usage. And it takes a long time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011—Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading, and multitasking with all of this that you can really notice performance differences between drives.

2) We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II and WoW are both a part of the test), as well as general use stuff (application installing, virus scanning). We included a large amount of email downloading, document creation, and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size % of Total
4KB 28%
16KB 10%
32KB 10%
64KB 4%

Only 42% of all operations are sequential; the rest ranges from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result we're going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time we'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, we will also break out performance into reads, writes, and combined. The reason we do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. It has lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback, as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.

We don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea. The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.

AnandTech Storage Bench 2011—Heavy Workload

We'll start out by looking at average data rate throughout our heavy workload test:

Heavy Workload 2011 - Average Data Rate

The Hawk's high sequential throughput and peak performance are highlighted in our Heavy suite.

Heavy Workload 2011 - Average Read Speed

Heavy Workload 2011 - Average Write Speed

Heavy Workload 2011 - Disk Busy Time

Heavy Workload 2011 - Disk Busy Time (Reads)

Heavy Workload 2011 - Disk Busy Time (Writes)



AnandTech Storage Bench 2011—Light Workload

Our light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does a better job of mimicking a typical light workload (although even lighter workloads would be far more read centric). The I/O breakdown is similar to the heavy workload at small IOs; however, you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size % of Total
4KB 27%
16KB 8%
32KB 6%
64KB 5%

Light Workload 2011 - Average Data Rate

Light Workload 2011 - Average Read Speed

Light Workload 2011 - Average Write Speed

Light Workload 2011 - Disk Busy Time

Light Workload 2011 - Disk Busy Time (Reads)

Light Workload 2011 - Disk Busy Time (Writes)



TRIM Performance

Our IO consistency tests give a good idea of the drive's garbage collection but for TRIM testing we still rely on the good old HD Tach method. To begin the test, I ran HD Tach on a secure erased drive to get the baseline performance:

Next I tortured the drive for 30 minutes with 4KB random writes (QD=32, 100% LBA) and TRIM'ed the drive after the torture:

TRIM works as expected, although there is some weird behavior at the 50% mark. It seems like the Hawk features a performance mode similar to OCZ's Vector and Vertex 4. The idea is rather simple: the drive only writes to half of the pages until they are full. The gain is lower latencies because the drive is essentially operating with 50% over-provisioning, but the downside is that the pages need to be reorganized once more than half of the drive has been filled. In the graph the write speed drops to ~100MB/s, but this is because the drive is reorganizing pages while it's being written to, which results in quite bad performance. After I had run HD Tach, I let the drive idle for about 10 minutes and Iometer showed that the write speed had returned to over 300MB/s. However, it dropped back to ~100MB/s quite quickly, so the reorganization might take a while (with the Vector it only took a few minutes), and I would advise letting the drive idle for an hour or so if you fill more than half of it.



Power Consumption

Idle power consumption is quite average, but under load the Hawk is one of the most power efficient SSDs we have tested. Unfortunately I don't have figures for slumber power (i.e. with HIPM+DIPM enabled) because Intel's desktop platforms don't support those power states, meaning that I would need a modified laptop for this particular test.

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write



Final Words

My feelings are twofold. The drive itself is actually quite good in terms of performance. Performance consistency could be better, but on the other hand the pricing justifies the performance deficit compared to the fastest drives. Toshiba is also known for high reliability as they've mainly focused on the enterprise and OEM markets, both of which demand extreme reliability (it's no coincidence that Apple has chosen Toshiba as one of their SSD suppliers). Normally I'm skeptical about long-term reliability when reviewing an SSD from a lesser-known brand, but since this is a 100% Toshiba drive, I'm very confident about the reliability.

NewEgg Price Comparison (6/19/2013) - 120/128GB / 240/256GB
Strontium Hawk: $100 / $185
Samsung SSD 840: $100 / $170
Samsung SSD 840 Pro: $135 / $250
Crucial M500: $130 / $200
SanDisk Extreme II: $130 / $230
Seagate 600: $110 / $210
OCZ Vector: $145 / $260

As far as the pricing goes, the closest competitor is Samsung's SSD 840, which is slightly less expensive on the 240/256GB model. For lighter workloads, the 840 comes out ahead, but anything more demanding tends to favor the Toshiba/Strontium drive. Strontium also sells their drive for quite a bit less than the Toshiba 256GB model, so that's another point in their favor.

On the other hand, I'm not very happy with how Strontium handled the switch of suppliers. It's rather obvious that if your whole product changes, it should also be renamed in the process. There are numerous ways Strontium could have named the "new" Hawk (Hawk II, Hawk Pro, and Hawk Extreme come to mind), but they didn't, and I can't understand why. Right now this isn't much of a problem anymore since Strontium is only shipping the new Toshiba-based Hawk, but it certainly doesn't inspire much trust in the company going forward.

Fortunately the Toshiba drive is faster than the SandForce-based original in most tests, but actions like these always raise the question: what if Strontium changes suppliers again and the new drive ends up being worse than the old one? The SSD market is mostly dominated by the big brands (Samsung, Intel, SanDisk, Crucial, Toshiba, etc.), so the smaller players can't afford many blemishes to their image like this. I hope this was just a one-time mistake for Strontium and that next time they'll pay closer attention to product naming.
