Overview
Papers on systems research frequently mention latency, throughput, and bandwidth. These three terms come up for storage (disk, memory), networking (Ethernet, RDMA), and software components alike, and the corresponding metrics are usually measured multiple times and averaged.
Latency
Latency is the time required to transmit a packet across a network.
Latency may be measured in many different ways: round trip, one way, etc. Latency may be impacted by any element in the chain used to transmit data: workstation, WAN links, routers, local area network (LAN), server, etc., and ultimately, in the case of very large networks, it may be limited by the speed of light.
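As a minimal sketch (the host and port below are hypothetical placeholders), round-trip latency to a TCP service can be estimated by timing the connection handshake and averaging over several samples:

```python
import socket
import time

def measure_rtt(host: str, port: int, samples: int = 10) -> float:
    """Estimate average round-trip latency (in ms) by timing TCP connects."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        # The TCP three-way handshake takes roughly one round trip to complete.
        with socket.create_connection((host, port), timeout=5):
            pass
        total += time.perf_counter() - start
    return total / samples * 1000  # milliseconds

# Example (hypothetical host/port):
# print(f"avg RTT: {measure_rtt('example.com', 80):.2f} ms")
```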
Measuring network performance: TCP
TCP is directly impacted by latency
TCP is a more complex protocol because it integrates a mechanism that checks that all packets are correctly delivered. This mechanism is called acknowledgment: the receiver transmits a specific packet, or sets a flag, back to the sender to confirm that a packet was properly received.
TCP Congestion Window
For efficiency purposes, not all packets are acknowledged one by one; the sender does not wait for each acknowledgment before sending new packets. Instead, the number of packets that may be sent before receiving the corresponding acknowledgment is governed by a value called the TCP congestion window.
How the TCP congestion window impacts throughput
If we assume that no packets are lost, the sender sends the first quota of packets (corresponding to the TCP congestion window) and, upon receiving the acknowledgment packets, increases the congestion window. Progressively, the number of packets that can be sent in a given period of time (the throughput) increases. The delay before acknowledgment packets are received (the latency) therefore determines how fast the TCP congestion window grows, and hence the throughput.
When latency is high, it means that the sender spends more time idle (not sending any new packets), which reduces how fast throughput grows.
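The toy simulation below is not a model of any real TCP stack; it simply doubles the window once per round trip, in the spirit of slow start, to illustrate how a larger RTT stretches out the ramp-up of throughput:

```python
def ramp_up_time(target_pkts_per_s: float,
                 rtt_s: float,
                 initial_cwnd: int = 10) -> float:
    """Time (seconds) for a toy slow-start sender to reach a target throughput.

    The window doubles once per RTT; throughput during a round trip is
    cwnd / RTT packets per second.
    """
    cwnd = initial_cwnd
    elapsed = 0.0
    while cwnd / rtt_s < target_pkts_per_s:
        cwnd *= 2          # one acknowledgment round grows the window
        elapsed += rtt_s   # each growth step costs one full round trip
    return elapsed

# In this toy model a 10x larger RTT makes the ramp-up ~14x longer:
# print(ramp_up_time(100_000, 0.010))  # 10 ms RTT  -> 0.07 s
# print(ramp_up_time(100_000, 0.100))  # 100 ms RTT -> 1.0 s
```

Because each window increase costs one full round trip, a higher latency both lengthens each step and adds more steps before a given throughput is reached.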
In TCP, latency can thus be understood as the delay between sending a packet and receiving its acknowledgment.
Throughput is the number of packets that can be sent in a given period of time.
Throughput
Throughput is the actual amount of data passing through a network (channel or interface) per unit of time; it can be thought of as the bandwidth actually achieved. Obviously, throughput can never exceed the bandwidth or rated speed: for 100 Mbps Ethernet, the upper bound on throughput is also 100 Mbps.
Throughput is defined as the quantity of data sent or received per unit of time (e.g., MB/s).
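As a sketch of how this is typically measured, divide the bytes actually transferred by the elapsed time; `read_chunk` here is a placeholder for whatever read primitive the channel under test provides (a socket's recv, a file object's read, and so on):

```python
import time

def measure_throughput(read_chunk, total_bytes: int,
                       chunk_size: int = 64 * 1024) -> float:
    """Return achieved throughput in MB/s.

    `read_chunk(n)` is any callable that returns up to n bytes from the
    channel being measured.
    """
    received = 0
    start = time.perf_counter()
    while received < total_bytes:
        data = read_chunk(chunk_size)
        if not data:
            break                      # channel closed early
        received += len(data)
    elapsed = time.perf_counter() - start
    return received / elapsed / (1024 * 1024)
```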
Bandwidth
Bandwidth is the maximum amount of data that a host in a computer network can transfer from one end of a digital channel to the other per unit of time, i.e., the maximum rate (MB/s).
In practice, one can issue write requests to the server repeatedly and average the results to obtain the average write bandwidth.
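A minimal sketch of that idea, assuming a local file stands in for the storage target (a real benchmark would send the writes to the server and account for caching effects):

```python
import os
import time

def average_write_bandwidth(path: str, block_size: int = 1024 * 1024,
                            blocks: int = 256, rounds: int = 5) -> float:
    """Average write bandwidth (MB/s) over several rounds of sequential writes."""
    payload = os.urandom(block_size)
    results = []
    for _ in range(rounds):
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(blocks):
                f.write(payload)
            f.flush()
            os.fsync(f.fileno())   # ensure data actually reaches the device
        elapsed = time.perf_counter() - start
        results.append(block_size * blocks / elapsed / (1024 * 1024))
    return sum(results) / len(results)

# print(average_write_bandwidth("/tmp/bw_test.bin"))
```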
IOPS
IOPS means IO operations per second, i.e., the number of read or write operations that can be completed in one second. A given number of IO operations also yields a certain throughput in megabytes per second, so the two are related. A third factor is involved, however: the size of each IO request. Depending on the operating system and the application or service that needs disk access, each request will read or write a certain amount of data at a time. This is called the IO size and could be, for example, 4 KB, 8 KB, 32 KB, and so on.
Average IO size x IOPS = Throughput in MB/s
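A small worked example with made-up numbers (8 KB requests at 20,000 IOPS) shows how the two relate:

```python
def throughput_mb_per_s(avg_io_size_kb: float, iops: float) -> float:
    """Throughput (MB/s) = average IO size x IOPS."""
    return avg_io_size_kb * iops / 1024

# 8 KB requests at 20,000 IOPS:
# 8 * 20000 / 1024 = 156.25 MB/s
# print(throughput_mb_per_s(8, 20_000))
```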
References:
1. Measuring network performance: links between latency, throughput and packet loss
2. 计算机网络速率,带宽,吞吐量概念 (Concepts of rate, bandwidth, and throughput in computer networks)