[SSD Testing Series 4] Performance: QoS

Overview

QoS

1. Definition

QoS (Quality of Service) originally comes from networking: under limited bandwidth, QoS allocates bandwidth among different kinds of traffic and provides end-to-end service-level guarantees. With the growth of SSDs and the rise of cloud workloads, QoS has also become an important metric for enterprise SSDs, and in SNIA's SSD performance test specifications several test items require QoS values to confirm the results. For consumer SSDs, QoS usually receives little attention: testing is mostly single-drive, and the limited workload pressure of personal use rarely exposes how much tuning and optimization each vendor has invested in this particular metric.

2. How to Test

FIO is one of the most commonly used performance testing tools for SSDs. Given an output such as the one below, how do we read the QoS values to evaluate the SSD's performance?
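The original post does not show the command line that was used. A minimal sketch that would produce a 70%-read random mixed workload like the one below (block size, direct flag, and target device here are assumptions inferred from the output, not the author's exact parameters):

fio --name=mytest --filename=/dev/nvme0n1 --direct=1 --rw=randrw --rwmixread=70 \
    --bs=4k --numjobs=4 --runtime=60 --time_based --group_reporting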

Starting 4 processes
Jobs: 3 (f=3): [f(3),_(1)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=4): err= 0: pid=8949: Fri Feb 10 16:44:03 2023
   read: IOPS=196k, BW=767MiB/s (804MB/s)(44.9GiB/60002msec)
    clat (nsec): min=730, max=2233.1k, avg=17692.44, stdev=19117.95
     lat (nsec): min=770, max=2233.1k, avg=17743.47, stdev=19118.30
    clat percentiles (usec):
     |  1.00th=[   11],  5.00th=[   11], 10.00th=[   12], 20.00th=[   12],
     | 30.00th=[   12], 40.00th=[   13], 50.00th=[   13], 60.00th=[   13],
     | 70.00th=[   14], 80.00th=[   15], 90.00th=[   19], 95.00th=[   69],
     | 99.00th=[   84], 99.50th=[   92], 99.90th=[  229], 99.95th=[  285],
     | 99.99th=[  343]
   bw (  KiB/s): min=149608, max=220904, per=24.99%, avg=196216.43, stdev=13371.35, samples=476
   iops        : min=37402, max=55226, avg=49054.11, stdev=3342.83, samples=476
  write: IOPS=84.1k, BW=328MiB/s (344MB/s)(19.2GiB/60002msec)
    clat (nsec): min=1060, max=2813.7k, avg=4094.14, stdev=2677.22
     lat (nsec): min=1130, max=2813.8k, avg=4166.65, stdev=2678.90
    clat percentiles (nsec):
     |  1.00th=[ 2024],  5.00th=[ 2320], 10.00th=[ 2512], 20.00th=[ 2800],
     | 30.00th=[ 3056], 40.00th=[ 3376], 50.00th=[ 3728], 60.00th=[ 4192],
     | 70.00th=[ 4640], 80.00th=[ 5280], 90.00th=[ 6176], 95.00th=[ 6944],
     | 99.00th=[ 8640], 99.50th=[ 9408], 99.90th=[11328], 99.95th=[12224],
     | 99.99th=[14784]
   bw (  KiB/s): min=64624, max=94736, per=24.99%, avg=84066.78, stdev=5844.81, samples=476
   iops        : min=16156, max=23684, avg=21016.69, stdev=1461.20, samples=476
  lat (nsec)   : 750=0.01%, 1000=0.01%
  lat (usec)   : 2=0.37%, 4=16.67%, 10=13.31%, 20=63.58%, 50=0.87%
  lat (usec)   : 100=4.94%, 250=0.20%, 500=0.06%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=6.82%, sys=39.33%, ctx=11748635, majf=0, minf=54
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=11776390,5045660,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=767MiB/s (804MB/s), 767MiB/s-767MiB/s (804MB/s-804MB/s), io=44.9GiB (48.2GB), run=60002-60002msec
  WRITE: bw=328MiB/s (344MB/s), 328MiB/s-328MiB/s (344MB/s-344MB/s), io=19.2GiB (20.7GB), run=60002-60002msec

Disk stats (read/write):
  nvme0n1: ios=11716159/4836960, merge=0/2283, ticks=149471/161606, in_queue=0, util=99.81%


The above is the fio result of a randrw test with 70% reads on an SSD. In fio's default output, read and write each get their own clat percentiles section, which gives the QoS distribution from one nine (90%) up to four nines (99.99%). That is often not precise enough: enterprise SSDs may require the QoS distribution down to five nines (99.999%) and beyond. For this, fio provides the following parameter:

--percentile_list=1:5:10:20:30:40:50:60:70:80:90:95:99:99.5:99.9:99.99:99.999:99.9999:99.99999:99.999999
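For example, appended to the run sketched earlier (still a sketch with the same assumed parameters):

fio --name=mytest --filename=/dev/nvme0n1 --direct=1 --rw=randrw --rwmixread=70 \
    --bs=4k --numjobs=4 --runtime=60 --time_based --group_reporting \
    --percentile_list=1:5:10:20:30:40:50:60:70:80:90:95:99:99.5:99.9:99.99:99.999:99.9999:99.99999:99.999999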

Adding this to the test yields the following result:

Starting 4 processes
Jobs: 3 (f=3): [_(1),f(3)][100.0%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 00m:00s]
mytest: (groupid=0, jobs=4): err= 0: pid=10503: Fri Feb 10 16:48:18 2023
   read: IOPS=197k, BW=769MiB/s (806MB/s)(45.0GiB/60002msec)
    clat (nsec): min=680, max=31738k, avg=17649.41, stdev=23135.14
     lat (nsec): min=730, max=31738k, avg=17699.65, stdev=23135.37
    clat percentiles (usec):
     |  1.000000th=[   11],  5.000000th=[   11], 10.000000th=[   12],
     | 20.000000th=[   12], 30.000000th=[   12], 40.000000th=[   13],
     | 50.000000th=[   13], 60.000000th=[   13], 70.000000th=[   14],
     | 80.000000th=[   15], 90.000000th=[   19], 95.000000th=[   69],
     | 99.000000th=[   84], 99.500000th=[   93], 99.900000th=[  229],
     | 99.990000th=[  326], 99.999000th=[  490], 99.999900th=[ 1336],
     | 99.999990th=[31589], 99.999999th=[31851]
   bw (  KiB/s): min=149280, max=225192, per=24.99%, avg=196743.89, stdev=13674.15, samples=477
   iops        : min=37320, max=56298, avg=49185.96, stdev=3418.55, samples=477
  write: IOPS=84.3k, BW=329MiB/s (345MB/s)(19.3GiB/60002msec)
    clat (nsec): min=1220, max=31584k, avg=4083.19, stdev=19949.69
     lat (nsec): min=1260, max=31584k, avg=4155.06, stdev=19949.96
    clat percentiles (usec):
     |  1.000000th=[    3],  5.000000th=[    3], 10.000000th=[    3],
     | 20.000000th=[    3], 30.000000th=[    4], 40.000000th=[    4],
     | 50.000000th=[    4], 60.000000th=[    5], 70.000000th=[    5],
     | 80.000000th=[    6], 90.000000th=[    7], 95.000000th=[    7],
     | 99.000000th=[    9], 99.500000th=[   10], 99.900000th=[   12],
     | 99.990000th=[   15], 99.999000th=[  161], 99.999900th=[ 1729],
     | 99.999990th=[31589], 99.999999th=[31589]
   bw (  KiB/s): min=63544, max=97448, per=24.99%, avg=84293.10, stdev=5999.66, samples=477
   iops        : min=15886, max=24362, avg=21073.25, stdev=1499.91, samples=477
  lat (nsec)   : 750=0.01%, 1000=0.01%
  lat (usec)   : 2=0.39%, 4=16.87%, 10=13.21%, 20=63.44%, 50=0.81%
  lat (usec)   : 100=5.01%, 250=0.21%, 500=0.06%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 50=0.01%
  cpu          : usr=6.77%, sys=38.91%, ctx=11780295, majf=0, minf=45
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=11808065,5058857,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=769MiB/s (806MB/s), 769MiB/s-769MiB/s (806MB/s-806MB/s), io=45.0GiB (48.4GB), run=60002-60002msec
  WRITE: bw=329MiB/s (345MB/s), 329MiB/s-329MiB/s (345MB/s-345MB/s), io=19.3GiB (20.7GB), run=60002-60002msec

Disk stats (read/write):
  nvme0n1: ios=11747770/4945814, merge=0/2321, ticks=150290/163386, in_queue=0, util=99.71%

3. Result Analysis

What do these values mean? In short, they describe the latency distribution of the I/Os in this run: in the read output above, for example, 99.999% of the I/Os completed within 490 microseconds. The read and write latency distributions are not the same here; if we ran the same test on Intel's Optane series, the read and write latencies might well be comparable. Since this is a latency metric, smaller is naturally better.
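When the percentiles need to be processed by a script rather than read off the console, fio can also emit JSON. A minimal sketch, assuming fio 3.x JSON field names (the exact percentile key strings depend on the fio version and on the percentile list, so verify them against your own output):

# re-run the same fio command as above with JSON output added:
#   --output-format=json --output=result.json
# then pull the read completion latency at the 99.999th percentile (in nanoseconds):
jq '.jobs[0].read.clat_ns.percentile["99.999000"]' result.json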

4. What Factors Affect QoS

For the factors that influence QoS, see this blog post by 古猫:
https://blog.csdn.net/zhuzongpeng/article/details/128782240
