Intel
- P4501: lower-power PCIe 3.0 x4
- DC P4510: PCIe 3.0 x4, seems to have serious issues
- D7-P5510: PCIe 4.0 x4
Optane
- P1600X: looks like a great ZIL device, M.2 22110
- 905P: M.2 22110
- DC P5800X: high idle power consumption, over 15 W under load
Samsung
- PM883: SATA, energy efficient
U.2 adaptors
- Delock adaptors seem reliable for PCIe 4.0, though they are expensive; specifically the 90169
- reichelt ships internationally
- LRNV94NF: PCIe 4.0 x16 to 4 U.2, can be found on Newegg
- 10Gtek PCIe 3.0 x16 to 4 Ports SFF-8639
MCIO
Server motherboards can have MCIO ports, which carry either 8 lanes of PCIe 5.0 or 8 SATA links, depending on which cables are connected to the port. MCIO carries only data, not power.
Someone had a good experience with C-Payne adaptors
- LetLinkSo PCIe 5.0 MCIO x8 to 2x SFF-8639 Cable for U.2 NVMe SSD with 15Pin Power, 2.1ft (65 cm): carries 3.3 V power, which some drives need and most other cables omit; works with the Optane 905P
- MCIO SFF-TA-1016 8i to 2x SFF-8639 U.2/U.3 cable - PCIe gen4
- MCIO PCIe gen5 Host Adapter x16 -RETIMER
- M.2 M-key PCIe 5.0 with ReDriver to MCIO 38P: M.2 to MCIO 8I
- Gen 5 MCIO x4 (SFF-TA-1016) 38P to MCIO x4 (SFF-TA-1016) 38P, ultra low loss 29AWG wire: a straight MCIO-to-MCIO extension
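To check that a drive wired through one of these adaptors or cables actually negotiated the expected link, Linux exposes the PCIe link state in sysfs. A minimal sketch, assuming the drive shows up as nvme0 (substitute your own device name):

    # PCI address behind the NVMe character device
    readlink -f /sys/class/nvme/nvme0/device
    # negotiated link vs. the maximum the device supports
    cat /sys/class/nvme/nvme0/device/current_link_speed   # e.g. "16.0 GT/s PCIe" for gen4
    cat /sys/class/nvme/nvme0/device/current_link_width   # e.g. "4"
    cat /sys/class/nvme/nvme0/device/max_link_speed
    cat /sys/class/nvme/nvme0/device/max_link_width

If current_link_speed comes back lower than max_link_speed, the cable or adaptor is a likely suspect (the link trains down when signal integrity is poor).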
NVMe Namespaces
Most enterprise SSDs support NVMe namespaces, which allow the drive’s capacity to be split into logically separate partitions. The OS sees multiple namespaces on a drive as separate devices.
Could be useful to divide a larger SSD into smaller chunks: some for a SLOG, some for bulk storage, some for the OS, etc. Namespaces can also apparently be passed into VMs, though I’m still checking whether that requires PCIe passthrough and whether passthrough works per namespace.
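A sketch of the split using nvme-cli, assuming the controller supports namespace management (check the OACS field); /dev/nvme0 and the sizes are placeholders, and deleting a namespace destroys its data:

    # confirm namespace management support and see total/unallocated capacity
    nvme id-ctrl /dev/nvme0 | grep -iE 'oacs|tnvmcap|unvmcap'
    # delete the factory full-size namespace (destructive!)
    nvme delete-ns /dev/nvme0 -n 1
    # create a 16 GiB namespace for a SLOG
    # (sizes are in LBA blocks; 33554432 assumes LBA format 0 is 512 B)
    nvme create-ns /dev/nvme0 --nsze=33554432 --ncap=33554432 --flbas=0
    # attach it to the controller (controller id 0 is an assumption; check `nvme id-ctrl`)
    nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=0
    nvme reset /dev/nvme0
    nvme list

The new namespace then appears as /dev/nvme0n1 and can be handed to ZFS with something like `zpool add <pool> log /dev/nvme0n1` (pool name hypothetical); the remaining capacity can be carved into further namespaces the same way.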