
Posted

I understand the difference between bandwidth, latency, and throughput. I wanted to ask, why does bandwidth have a unit of time?

I get that if packets were cars on a highway, latency would be the speed of each lane, bandwidth would be the number of lanes, and throughput would be the number of cars that got through in a given timespan.

If using the hose-and-water analogy, bandwidth would be the diameter of the hose. But with networks, bandwidth is not just a number of 'lanes' or a specific size; it's a quantity per time (Gb/s, Mb/s, etc.).

Why is this? And how is that not throughput?

Posted

Bandwidth is the expected theoretical capacity, like the diameter of a pipe: 1 Gb/s, for example.


For some networks the time step is assumed to be the same across the whole network and is therefore omitted, which is why you may not see it written out every time. In utilities, though, it is customary to keep the time step in the units.
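
To make the "quantity per time" point concrete: because bandwidth is a rate, you can divide a size by it and get a time. A minimal sketch, with made-up link speed and file size:

```python
# Bandwidth is a rate (quantity per time), so it predicts how long a
# transfer of a given size should take. Values below are illustrative.

link_bandwidth_bps = 1_000_000_000   # 1 Gb/s, the theoretical "pipe diameter"
file_size_bits = 8 * 500_000_000     # a 500 MB file, expressed in bits

# If bandwidth were only a "number of lanes" with no time unit, this
# division would be meaningless: you need bits/second to get seconds.
ideal_transfer_seconds = file_size_bits / link_bandwidth_bps
print(f"Ideal transfer time: {ideal_transfer_seconds:.1f} s")  # 4.0 s
```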

Throughput has multiple meanings depending on the context, but it can be thought of in two ways: the actual maximum capacity and the instantaneous capacity.

The first reflects inefficiencies, such as aging copper cable or interference in wireless networks. For the most part these are considered "actual" capacities. You could think of this as a partially clogged pipe with sediment buildup, or one with a small leak.
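
If you want to see that gap yourself, the usual idea is just to time real data moving and divide. A rough sketch, where read_chunk() is a hypothetical stand-in for whatever actually receives data on your link:

```python
import time

def measure_throughput_bps(read_chunk, total_bytes):
    """Measured throughput in bits/second while receiving total_bytes."""
    received = 0
    start = time.monotonic()
    while received < total_bytes:
        received += len(read_chunk())  # hypothetical receive function
    elapsed = time.monotonic() - start
    return (received * 8) / elapsed
```

On an aging or noisy link the measured figure lands below the rated bandwidth, just like the clogged pipe passes less water than its diameter suggests.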

The second is the instantaneous flow distribution. In other words, you could have 1 Gb/s of capacity, but if the overall network is congested at that instant, 1 Gb/s may not be available on the backhaul at that moment: the network is not balanced, and demand > supply. On the other hand, designing a network to handle 100% of the bandwidth 100% of the time would be a waste. If you turn on only your bathtub, it will get 100% flow as expected. But if you turn on every other faucet in your home at the same time, supply < demand, and the flow at the bathtub may be greatly reduced.
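
As a toy version of the faucet example (assuming the supply is split equally among active flows, which real schedulers are far more sophisticated about):

```python
def fair_share_bps(supply_bps, active_flows):
    """Equal instantaneous share per flow when demand exceeds supply."""
    return supply_bps / max(active_flows, 1)

supply = 1_000_000_000              # a 1 Gb/s backhaul
print(fair_share_bps(supply, 1))    # bathtub alone: the full 1 Gb/s
print(fair_share_bps(supply, 10))   # every faucet open: 100 Mb/s each
```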
 

Latency has more to do with the time between arrivals. Traditionally, long-distance variable-position wireless links have high latency, e.g., satellites. To obtain, say, 50 Mb/s over satellite, proportionately larger packet sizes are needed, because the latency is likely at least 7x that of a fixed-position or copper-based network. The resulting problem is that losing a packet then means a significant increase in interruptions. This can be thought of as another example of "actual" capacity vs. theoretical max. Keeping with the plumbing analogy, it's maybe something like having a lot of air in the pipes :)
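
To put rough numbers on that: the data that must be "in flight" to sustain a given rate is rate × round-trip time (the bandwidth-delay product). A sketch with typical ballpark RTTs, not figures from this thread:

```python
def bits_in_flight(target_bps, rtt_seconds):
    """Data that must be 'in the pipe' to sustain target_bps at this RTT."""
    return target_bps * rtt_seconds

target = 50_000_000                        # the 50 Mb/s goal from above
print(bits_in_flight(target, 0.080) / 8)   # ~80 ms terrestrial RTT: 500 KB
print(bits_in_flight(target, 0.600) / 8)   # ~600 ms GEO satellite: 3.75 MB
```

Losing one of those much larger in-flight bursts costs proportionately more, which is where the increase in interruptions comes from.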

