USB 3.2 Gen 2x2 State of the Ecosystem Review: Where Does 20Gbps USB Stand in 2020?
by Ganesh T S on October 5, 2020 10:30 AM EST
Testbed Travails
Regular readers of our direct-attached storage reviews might have noticed that we upgrade our DAS testbed approximately every couple of years. It is important to keep the testbed consistent across reviews of different storage devices, so that the effects of the CPU, DRAM bandwidth, and other platform factors remain the same across benchmark runs on different devices.
Our most recent update to the DAS testbed was in early 2019 - a move to the Hades Canyon NUC triggered by the inconsistent performance of the Alpine Ridge port in our Skylake DAS testbed. This inconsistency started showing up after we attempted a Thunderbolt firmware upgrade to enable eGFX functionality on the machine. Prior to the Skylake-based testbed, we were using a Haswell-based system. We opted against building a fresh DAS testbed with either the TRX40 or Z490 boards, as the imminent arrival of USB4 meant that we could again be forced to do an upgrade. A testbed change not only involves preparing a new machine - it also involves benchmarking older drives again (to the extent possible). We wanted to put this off as much as possible.
The Hades Canyon NUC, which has no way to host PCIe expansion cards, was ruled out as a recipient of the Yottamaster C5. The initial plan was to use the Ghost Canyon NUC for this purpose after removing the discrete GPU. As we soon discovered, the PSU of the Ghost Canyon NUC doesn't come with SATA power cables, and an adapter cable wasn't handy. We moved on to our next option, the Skylake-based DAS testbed.
Yottamaster C5 in the GIGABYTE Z170X-UD5 TH ATX Motherboard
The installation of the Yottamaster C5 in the GIGABYTE Z170X-UD5 TH ATX board was uneventful - no drivers to install, as the ASMedia ASM3242 in the C5 uses Microsoft's XHCI drivers built into Windows 10 (May 2020 Update). As a first step, we took the SanDisk Extreme Portable SSD v2 (SuperSpeed USB 10Gbps), for which we already had recent benchmark numbers, and processed it with our test suite using the C5's Type-C port. The benchmarks with the ASM3242 host delivered better results than those obtained with the Alpine Ridge port of our regular testbed - but this was to be expected, given the ASMedia chipsets at both ends of the chain. Satisfied that the updated testbed was shaping up well, we connected the SanDisk Extreme PRO Portable SSD v2 to the C5's port. Unfortunately, the drive kept connecting and disconnecting frequently (YouTube video link for screen capture). Sometimes, it stayed up long enough to process a couple of iterations of one of the CrystalDiskMark workloads before disappearing (as shown in the screenshot below).
Initially, the suspicion fell on the Plugable USBC-TKEY in the middle of the chain (kept in place for power measurement), but the behavior was the same with a direct connection too. The WD_BLACK P50 also exhibited the same problems. Based on online reviews, this problem doesn't seem to be isolated to the Yottamaster C5 - ASM3242 cards from other vendors also appear to have similar issues.
Ruling out the Skylake-based testbed for the evaluation, we decided to attempt the installation of the card on our Haswell-based testbed. In this system, we no longer had the disconnection issue. Our test suite managed to run to completion on all the drives that we wanted to test.
Testing in Progress on the 'Best-Performing' USB 3.2 Gen 2x2 Testbed - (Core i7-4790 / Asus Z97-PRO Wi-Fi ac ATX / Corsair Air 540)
We did observe one hiccup in the set of tests - while processing the CrystalDiskMark 4K random reads and writes with 16 threads and a queue depth of 32, the system froze completely for a good 30-60s before recovering (the effect can be seen in the CrystalDiskMark power consumption graphs in a later section). Our internal SSDs review editor, Billy, was able to reproduce the issue on a Haswell-based system (using a Core i7-4790K) at his end with an Intel USB 3.0 port and a SuperSpeed 10Gbps enclosure based on the JMicron JMS583 chipset. The problem was not reproducible with internal drives. Our inference is that the combination of high queue depth and thread count creates more driver overhead than the Haswell-based systems can comfortably handle.
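For readers who want to probe this corner case on their own hardware, the CrystalDiskMark RND4K Q32T16 read pass maps roughly onto the following fio job file. This is an approximation, not CrystalDiskMark's exact internals; the target filename is a placeholder that must be pointed at the drive under test.

```ini
; Rough fio approximation of CrystalDiskMark's RND4K Q32T16 read pass
; (4K random reads, queue depth 32, 16 workers).
; The filename below is a placeholder - point it at the DAS under test.
; Use ioengine=libaio on Linux instead of windowsaio.
[cdm-rnd4k-q32t16]
filename=\\.\PhysicalDrive2
ioengine=windowsaio
thread=1
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=16
group_reporting=1
time_based=1
runtime=30
```

Running this against a USB-attached drive on a Haswell host should surface the same driver-overhead stall, if our inference above is correct.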
As a last resort, we shifted back to the current DAS testbed, the Hades Canyon NUC. Taking the eGFX route, we connected the PowerColor Gaming Station to the Thunderbolt 3 port after removing its internal daughtercard responsible for all of its I/O ports. The PowerColor Gaming Station unofficially supports a SATA drive, which means its PSU has a spare SATA power cable. Using this, it was a breeze to get the Yottamaster C5 up and running in the eGPU enclosure.
Our test suite was processed on the WD_BLACK P50 and the SanDisk Extreme PRO Portable SSD v2 using multiple testbed configurations detailed above. We also processed the SanDisk Extreme Portable SSD v2 (SuperSpeed USB 10Gbps device) using the same ports for comparison purposes. The two SuperSpeed USB 20Gbps drives were also processed with our regular testbed to provide an idea of their performance when connected to regular Gen 2 (SuperSpeed USB 10Gbps) ports.
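When interpreting the numbers in the following sections, it helps to remember that the marketing line rates include bit-encoding overhead. A quick back-of-the-envelope calculation (the helper function below is our own illustration, not part of any benchmark tool):

```python
# Theoretical usable bandwidth for USB 3.x signaling rates, after
# subtracting bit-encoding overhead. Real-world throughput is lower
# still due to protocol framing and flow control.

def usable_bytes_per_sec(line_rate_gbps, encoding):
    """Line rate minus encoding overhead, in bytes per second."""
    payload_bits, total_bits = encoding
    return line_rate_gbps * 1e9 * payload_bits / total_bits / 8

# (line rate in Gbps, (payload bits, total bits) of the line code)
rates = {
    "SuperSpeed USB 5Gbps": (5, (8, 10)),       # 8b/10b
    "SuperSpeed USB 10Gbps": (10, (128, 132)),  # 128b/132b
    "SuperSpeed USB 20Gbps": (20, (128, 132)),  # two 10Gbps lanes
}

for name, (rate, enc) in rates.items():
    print(f"{name}: {usable_bytes_per_sec(rate, enc) / 1e9:.2f} GB/s")
```

This puts the ceilings at roughly 0.50 GB/s, 1.21 GB/s, and 2.42 GB/s respectively - useful context for the gap between the Gen 2 and Gen 2x2 results in the graphs.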
AnandTech DAS Testbed Configurations for USB 3.2 Gen 2x2 Testing

| Configuration | Suffix in Graphs | Notes |
|---|---|---|
| Asus Z97-PRO Wi-Fi ac ATX, Core i7-4790, Corsair Vengeance Pro CMY32GX3M4A2133C11 DDR3-2133 32 GB (4x 8GB) @ 11-11-11-27, Seagate 600 Pro 400 GB, Yottamaster C5 USB 3.2 Gen 2x2 Expansion Card, Corsair AX760i 760 W, Corsair Air 540 | [ASM3242] | N/A |
| Intel NUC8i7HVK, Core i7-8809G, Crucial Technology Ballistix DDR4-2400 SODIMM 32GB (2x 16GB) @ 16-16-16-39, Intel Optane SSD 800p SSDPEK1W120GA, Intel SSD 545s SSDSCKKW512G8, PowerColor Gaming Station, Yottamaster C5 USB 3.2 Gen 2x2 Expansion Card | [ASM3242 via JHL6540] | N/A |
| Intel NUC8i7HVK, Core i7-8809G, Crucial Technology Ballistix DDR4-2400 SODIMM 32GB (2x 16GB) @ 16-16-16-39, Intel Optane SSD 800p SSDPEK1W120GA, Intel SSD 545s SSDSCKKW512G8 | [JHL6540] | Alpine Ridge Thunderbolt 3 port used in USB 3.1 Gen 2 mode |
| GIGABYTE Z170X-UD5 TH ATX, Core i5-6600K, G.Skill Ripjaws 4 F4-2133C15-8GRR DDR4-2133 32 GB (4x 8GB) @ 15-15-15-35, Samsung SM951 MZVPV256 NVMe 256 GB, Yottamaster C5 USB 3.2 Gen 2x2 Expansion Card, Cooler Master V750 750 W, Cooler Master HAF XB EVO | [ASM3242 Skylake] | SanDisk Extreme Portable SSD v2 only |
The table above lists all the configurations that were tested, along with notes on the implications of the suffix seen in the graphs in the following sections.
81 Comments
Eric_WVGG - Tuesday, October 6, 2020 - link
On a related note, I would love to see you guys do some kind of investigation into why we're five years into this standard and one still cannot buy an actual USB-C hub (i.e. not a port replicator).
hubick - Tuesday, October 6, 2020 - link
A 3.2 hub with gen 2 ports and a 2x2 uplink would be cool!
I wanted a 10gbps / gen 2 hub and got the StarTech HB31C3A1CS, which at least has a USB-C gen 2 uplink and a single USB-C gen 2 port (plus type A ports). Don't know if you can do any better than that right now.
repoman27 - Tuesday, October 6, 2020 - link
Although it's still not exactly what you're looking for, I've tried (unsuccessfully) to get people to understand what a unicorn the IOGEAR GUH3C22P is. Link: https://www.iogear.com/product/GUH3C22P
It's a 5-port USB3 10Gbps hub with a tethered USB Type-C cable on the UFP (which supports up to 85W USB PD source), two (2!) downstream facing USB Type-C ports (one of which supports up to 100W USB PD sink), and two USB Type-A ports (one of which supports up to 7.5W USB BC).
serendip - Thursday, October 8, 2020 - link
No alt mode support like for DisplayPort. I haven't found a portable type-C hub that supports DisplayPort alt mode over downstream type-C ports, although some desktop docks support it.
stephenbrooks - Tuesday, October 6, 2020 - link
What if I want a USB 20Gbps port on the front of my computer? Can I get a USB-C front panel and somehow connect the cable internally to the PCI-E USB card?
abufrejoval - Tuesday, October 6, 2020 - link
I am building hyperconvergent clusters for fun and for work - the home-lab one out of silent/passive J5005 Atoms with 32GB RAM and a 1TB SATA SSD, the next iteration most likely from 15W-TDP NUCs; an i7-10700U with 64GB RAM and a 1TB NVMe SSD is in testing.
Clusters need short-latency, high-bandwidth interconnects. Infiniband is a classic in data centers, but NUCs offer 1Gbit Ethernet pretty much exclusively, with Intel struggling to do 2.5Gbit there, while Thunderbolt and USB3/4 could do much better. Only they aren't peer-to-peer, and a TB 10Gbase-T adapter sets you back further than the NUC itself while adding lots of latency and TCP/IP, when what I want is RDMA.
So could we please pause for a moment and think on how we can build fabrics out of USB-X? Thunderbolt/USB4 is already about PCIe lanes, but most likely with multi-root excluded to maintain market segmentation and reduce validation effort.
I hate how the industry keeps going to 90% of something really useful and then concentrating on 200% speed instead of creating real value.
repoman27 - Wednesday, October 7, 2020 - link
Uh, Thunderbolt and USB4 are explicitly designed to support host-to-host communications already. OS / software support can be a limiting factor, but the hardware is built for it.
Existing off-the-shelf solutions:
https://www.dataonstorage.com/products-solutions/k...
https://www.gosymply.com/symplyworkspace
https://www.areca.com.tw/products/thunderbolt-8050...
http://www.accusys.com.tw/T-Share/
IP over Thunderbolt is also available:
https://thunderbolttechnology.net/sites/default/fi...™%20Networking%20Bridging%20and%20Routing%20Instructional%20White%20Paper.pdf
https://support.apple.com/guide/mac-help/ip-thunde...
repoman27 - Wednesday, October 7, 2020 - link
Stupid Intel URL with ™ symbol. Let's try that again:https://thunderbolttechnology.net/sites/default/fi...
abufrejoval - Wednesday, October 7, 2020 - link
Let me tell you: You just made my day! Or more likely one or two weekends!
Not being a Mac guy, I had completely ignored Thunderbolt for a long time and never learned that it supported networking natively. From the Intel docs it looks a bit similar to Mellanox VPI and host-chaining: I can use 100Gbit links there without any switch to link three machines in a kind of "token ring" manner for Ethernet (these are hybrid adapters that would also support Infiniband, but driver support for host-chaining only covers Ethernet). Unfortunately, the effective bandwidth is only around 35Gbit/s for direct hops and slows to 16Gbit/s once it has to pass through another host: not as much of an upgrade over 10Gbase-T as you'd hope for. I never really got into testing latencies, which is where the Infiniband personality of those adapters should shine.
And that’s where with TB I am hoping for significant improvements over Ethernet apart from native 40Gbit/s speed: Just right for Gluster storage!
I also used to try to get Ethernet over Fibre Channel working years ago, when they were throwing out 4Gbit adapters in the data center, but even though it was specified as a standard, it never got driver support, and at the higher speeds the trend went in the other direction.
So I’ll definitely try to make the direct connection over TB3 work: CentOS 8 should have kernel support for TB networking, and the best news is that it doesn’t have to wait for TB4 - it should work with TB3, too.
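[For those wanting to try the same thing, a minimal setup sketch on two directly-cabled Linux hosts might look like the following. This assumes the thunderbolt-net kernel module is available (mainline since roughly kernel 4.15); the interface name and addresses are illustrative and may differ per distribution. -Ed.]

```shell
# Sketch: IP over Thunderbolt between two directly-cabled Linux hosts.
# Assumes the thunderbolt-net module; interface name may vary.
sudo modprobe thunderbolt-net                    # load the TB networking driver
ip link show                                     # a 'thunderbolt0' interface should appear
sudo ip addr add 10.0.0.1/24 dev thunderbolt0    # use 10.0.0.2/24 on the peer host
sudo ip link set thunderbolt0 up
ping -c 3 10.0.0.2                               # verify the peer is reachable
```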
I’ve just seen what seemed like an incredibly cheap 4-way TB switch recommended by Anton Shilov on the TomsHardware side of this enterprise, which unfortunately is only on pre-order for now (https://eshop.macsales.com/shop/owc-thunderbolt-hu...), but is supposed to support TB networking. Since the NUCs are single-port TB3 only, that should still do the trick and be upgradable to TB4 for just around $150… The 5Gbit USB3 Aquantia NIC wasn’t much cheaper, and even 2.5Gbit USB3 NICs are still around $40.
Exciting, exciting all that: Thank you very much for those links!
abufrejoval - Wednesday, October 7, 2020 - link
...except... I don't think that "switch" will be supporting multiple masters, same as USB.
If it did, Intel would have shot themselves in the foot: 40Gbit networking on NUCs and laptops with little more than passive cables - that's almost as bad as adding a system management mode on the 80486 and finding that it can be abused to implement a hypervisor (Mendel and Diane started VMware with that trick).
Yet that's exactly what consumers really thirst for.